Ratios, proportions, and percents
Percent of change
The percent of change is the ratio of the amount of change to the original amount.
• When an amount increases, the percent of change is a percent of increase.
• When the amount decreases, the percent of change is negative. You can also state a negative percent of change as a percent of decrease.
$Percent\;change=\frac{amount\;of\;change}{original\;measurement}\times 100$
or
$Percent\;change=\frac{New\;Value\;-\;Old\;Value}{Old\;Value}\times 100$
What is the percent of change from 30 to 24?
Percent of change = $\frac{New\;Value\;-\;Old\;Value}{Old\;Value}\times 100 = \frac{24-30}{30}\times 100 = \frac{-6}{30}\times 100 = -0.2\times 100 = -20\%$
The percent of decrease is 20%.
What is the percent of change from 8 to 10?
Percent of change = $\frac{New\;Value\;-\;Old\;Value}{Old\;Value}\times 100 = \frac{10-8}{8}\times 100 = \frac{2}{8}\times 100 = 0.25\times 100 = 25\%$
The percent of increase is 25%.
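The same calculation can be written as a short function. A minimal sketch (the function name is illustrative, not part of the original material):

```python
def percent_change(old_value: float, new_value: float) -> float:
    """Percent of change = (new value - old value) / old value * 100."""
    return (new_value - old_value) / old_value * 100

print(percent_change(30, 24))  # -20.0 -> a 20% decrease
print(percent_change(8, 10))   #  25.0 -> a 25% increase
```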
What is the percent of change from 10 to 36?
• 26% increase
• 46% decrease
• 260% increase
• 260% decrease
# Chapter 2 - Section 2.3 - Formulas and Problem Solving - Exercise Set: 33
171 packages
#### Work Step by Step
We are given that the ballroom is a square and has sides that each measure 64 feet. Therefore, we know that the area of the ballroom is $64^{2}=64\times64=4096$ square feet. The ballroom will require 4096 one-foot-square tiles, and each package contains 24 tiles. $4096\div24\approx170.6667$. We round up to 171 packages, since we will need to use at least some of the tiles in the 171st package.
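A quick way to check this arithmetic (a small sketch; the variable names are illustrative):

```python
import math

side_feet = 64
tiles_needed = side_feet ** 2          # 64 * 64 = 4096 one-foot-square tiles
tiles_per_package = 24

# Round up: a partially used package still has to be purchased.
packages = math.ceil(tiles_needed / tiles_per_package)
print(tiles_needed, packages)          # 4096 171
```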
## Floating point arithmetic examples

The most significant bit of a float or double is its sign bit. The exponent, 8 bits in a float and 11 bits in a double, sits between the sign and the mantissa, and the exponent field is interpreted in one of three ways: a biased exponent for normalized numbers, an all-zeros field for zero and denormalized numbers, and an all-ones field for infinities and NaN. NaN is the result of certain operations, such as the division of zero by zero. For normalized numbers the mantissa lies in the range 1 to 2, so its most significant bit is predictable and is not stored; the exponent field indicates whether or not the number is normalized, and any floating-point number that does not fit this pattern is said to be denormalized. The allowance for denormalized numbers at the bottom end of the range of exponents supports gradual underflow: leaving the lowest exponent for denormalized numbers allows smaller numbers to be represented. Overflow is said to occur when the true result of an arithmetic operation is finite but larger in magnitude than the largest floating-point number that can be stored using the given precision. In the convention used here, where the stored mantissa is treated as an integral value, an exponent field of 00000001 in a float corresponds to a power of two of 1 - 126 = -125, and a field of 11111110 corresponds to 254 - 126 = 128.

If the radix point is fixed, the fractional numbers are called fixed-point numbers; fixed-point representation is typical of quantities in commerce and finance, while floating point is used for scientific constants and values. Floating-point notation is a computer shorthand for scientific notation (for example, 6.236 × 10^3), and 3e-5 means 3 × 10^-5. The operations are carried out with algorithms similar to those used on sign-magnitude integers, because of the similarity of representation; for example, only numbers of the same sign are added directly, and to add numbers with different exponents, such as 1.1 × 10^3 and 50, the operands must first be aligned to a common exponent. The standard simplifies the task of writing numerically sophisticated, portable programs, and the JVM throws no exceptions as a result of any floating-point operations.
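To make the bit layout concrete, here is a minimal Python sketch that decodes the sign, exponent and fraction fields of a single-precision value, using the standard IEEE 754 view with an 8-bit exponent and 23 stored fraction bits (the helper name and sample values are illustrative):

```python
import struct

def decode_float(x: float) -> None:
    """Print the IEEE 754 single-precision bit fields of x."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent_field = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF

    if exponent_field == 0xFF:
        kind = "infinity" if fraction == 0 else "NaN"
    elif exponent_field == 0:
        kind = "zero" if fraction == 0 else "denormalized"
    else:
        kind = "normalized"

    print(f"{x!r}: sign={sign} exponent={exponent_field:08b} "
          f"fraction={fraction:023b} ({kind})")

for value in (1.5, -0.15625, 3e-45, float("inf"), float("nan")):
    decode_float(value)
```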
# Epidemiology of Patients Diagnosed with Prescription and Non-Prescription Drug Overdose at the Riyadh Security Forces Hospital Between January 2007 and December 2011
By Naser Al-Jaser, M. Cli. Epi and Niyi Awofeso
Submitted: May 21st 2012. Reviewed: August 30th 2012. Published: May 15th 2013.
DOI: 10.5772/52879
## 1. Introduction
There is global concern about the rising rate of drug overdose morbidity and mortality, particularly from opioid medicines.[1] Drug overdose is one of the leading causes of death in many countries.[2] In the US, the prescription drug mortality rate is higher than the death rate from illicit drugs, and drug overdose mortality currently exceeds mortality from motor vehicle accidents.[3] Moreover, there has been a tenfold increase in painkiller prescriptions in the US over the past 15 years.[4] In Saudi Arabia, there has been a significant increase in the use of prescription drugs compared with the previous decade, as the Ministry of Health stated in its 2009 annual report.[5] A number of studies have investigated the epidemiology of drug overdose in Saudi Arabia; however, most of these were conducted in the late twentieth century.[6,7]
The purpose of this research is to investigate prescription and non-prescription overdose cases admitted to the emergency department of the Security Forces Hospital, Riyadh, from 2007 to 2011. The study sought to identify demographic characteristics of patients who were admitted to the emergency department with drug overdose, including age, gender, income and occupation.
The findings of this study have a number of implications for the Security Forces Hospital and for the prevention of drug overdoses in Saudi Arabia, particularly among elderly patients who take Warfarin continuously. Further, it appears that parents leave their medications unsecured and within reach of children; preventive and awareness programs are needed to address both issues.
## 2. Literature review
An Adverse Drug Event (ADE) is defined as an injury resulting from medical intervention related to a drug.[8] It is considered a major problem in medicine because it results in hospital admissions. ADEs include harm caused by the drug itself, such as adverse drug reactions and overdoses, and harm resulting from how the drug is used, such as dose reductions and discontinuation of drug therapies.[9] Previous studies have found that ADEs account for 3.9–6.2 per cent of hospital admissions, and drug overdoses account for a large share of ADE-related admissions.[10,11]
Drug overdose can be defined as intentionally or unintentionally administering a higher dose of prescription or non-prescription drugs than recommended.[12] Drug overdose is considered a major health problem, particularly in developed countries. In the United States (US), the Centers for Disease Control and Prevention (CDC) recently reported that fatal overdoses from opiate painkillers currently exceed those from cocaine and heroin combined.[12] The rate of prescription drug use is increasing globally.[13] In Saudi Arabia, there has been a significant increase in prescription drug use since 2000 compared with the previous decades; however, there is a dearth of information relating to drug use and overdoses.[5]
In many Asian countries, drug overdose mortality is considered a major problem. For example, a study in northern Thailand which investigated the overdose mortality rate of injecting drug users between 1999 and 2002 found a death rate of 8.97 per 1,000 people among 861 drug users who were Human Immunodeficiency Virus (HIV)-negative.[14] A study in Xichang City, China, found a heroin overdose mortality rate of 4.7 per 100 people among 379 people who injected drugs during 2002 to 2003.[15] Further, in a review conducted in several central Asian countries, emergency medical services stated that there were 21 drug overdose deaths in Tajikistan and 57 in Kyrgyzstan in 2006.[1]
Many European countries also consider drug overdose a major concern, and it is considered one of the leading causes of death. The average mortality rate is 21 deaths per 1 million people aged 15–64.[16] Drug overdose in Europeans aged 15–39 accounted for 4 per cent of all deaths. Males were at a greater risk than females in all countries, with males accounting for 81 per cent of all drug-related deaths reported in European countries. The male to female ratio varied across countries, with the lowest rate in Poland (4:1) and the highest rate in Romania (31:1). The most common drugs used in almost all countries were opioids, which accounted for 90 per cent of drugs used in five countries and 80–90 per cent in 12 countries.[16]
Drug overdose is also considered a major public health threat in the US. There, the drug overdose mortality rate among adults increased from 4 per 100,000 people in 1999 to 8.8 per 100,000 in 2006. Moreover, deaths from drug overdose increased from 11,155 in 1999 to 22,448 in 2005, which can be attributed mainly to prescription drugs rather than illicit drugs.[3] Drug overdose is the second leading cause of death among all unintentional deaths in the US. The most common drugs that caused death by overdose were heroin, cocaine and painkillers. The use of prescription medicines has increased, thus contributing to the death rate.[4] According to the CDC, from 2005 to 2007, prescription drugs such as benzodiazepine, anti-depressants and opioid medicines were found in 79 per cent (2,165 cases) of all substance overdoses.[17]
In Australia, there appears to be a lower risk of drug overdose than in other countries. For instance, the rate of death from opioids was 101.9 per 1 million people in 1999 and 31.3 deaths per 1 million in 2004.[18] Moreover, in 2005, the Illicit Drug Reporting System distributed a survey among intravenous drug users and found that 46 per cent had experienced an overdose.[18] It was also found that 357 deaths were caused by opioid overdose and 40 per cent of deaths occurred in New South Wales. Males accounted for 75 per cent of overdose deaths, and those aged 25–34 were most at risk, accounting for 40 per cent of deaths.[19] Recently released Australian prisoners are at significantly increased risks of illicit drug overdose and deaths.[20]
In Saudi Arabia, studies have noted an increase in drug overdoses in localised cohorts over the past several decades. However, there are no significant statistics for drug overdose morbidity and mortality in Saudi Arabia as a whole.[6,21] Several studies have been undertaken in Saudi Arabia to investigate drug overdoses in hospitals. Moazzam’s and Aljahdali’s studies found that paracetamol accounted for 24.1 per cent of 170 drug overdose cases and 30 per cent of 79 cases, respectively.[21,22] Ahmed’s study found that mefenamic acid accounted for 20 per cent of 50 cases investigated.[6] The death rate among drug overdose cases has also been investigated in some Saudi studies. Ahmed stated that there was one death among 106 drug overdoses admitted between 1992 and 1994.[6] Elfawal investigated 249 deaths from substance overdose between 1990 and 1997, and found 20 per cent of cases related to medically prescribed drugs.[7] Aljahdali and Ahmed found females accounted for a higher percentage of drug overdose cases.[6,22] Moazzam and Elfawal found males were represented in a higher percentage of cases.[7,21]
Drug overdoses could result from non-prescription substances such as herbal medicines.[23] The problem with herbal remedies relates to limited control and regulation among stores that provide them.[24] Many people believe that herbal substances are harmless and that it is safe to administer excessive amounts because they come from natural sources.[25] Although the rate of usage has increased, fewer than half of patients consult their physicians before administrating herbal remedies.[26] Further, the accurate dosage of herbal medicines is variable, and there are no guidelines to determine correct dosage.[25]
Drug overdoses could result from administering illicit drugs such as heroin and hashish.[27,28] As Saudi Arabia is a strict Islamic country, and Islam prohibits the use of illicit drugs, overdose cases involving illicit drugs are rare.[22] However, according to a world drug report, Saudi Arabia is considered a major market of phenethylline (Captagon) in the Middle East. The Saudi government confiscated more than 10 million pills in one seizure in 2010. However, the prevalence of amphetamines in Saudi Arabia is low compared with other western countries: in 2006, the prevalence of amphetamines in Saudi Arabia was 0.4 per 100,000 people, whereas in Australia and the US, the prevalence was 2.7 and 1.5 per 100,000 people respectively. Further, the prevalence of opioids and cannabis was 0.06 and 0.3 per 100,000 respectively in Saudi Arabia, 0.4 and 10.6 per 100,000 respectively in Australia and 5.9 and 13.7 per 100,000 respectively in the US. Therefore, the prevalence of opioids and cannabis are markedly lower in Saudi Arabia than in Australia and the US.[13]
Suicide is one of the major motivations and outcomes of intentional drug overdose.[29] Suicide accounts for 2 per cent of all deaths in the world. In 2005, there were about 800,000 deaths from suicide, and about 56 million deaths globally.[30] Drugs cause 11 per cent of suicides in Australia.[30] A study found that suicide is a greater risk among people who had a history of drug overdoses compared with people who did not.[31] Another study found a positive correlation between suicide and drug overdose.[32] Moreover, research has found that committing suicide by administering drugs is common among adolescents.[33] One study found suicide was associated with both prescription and non-prescription drugs, with a strong association between opiates and suicide, and opioid users were 14 times more likely to attempt suicide compared with non-opioid users.[34]
The excessive availability of medicines in households is due to the relative affordability of drugs, which can be bought from a range of places including markets, internet pharmacies and cosmetic stores. For instance, patients can purchase prescription drugs from an internet pharmacy without a prescription.[35] One survey investigated how easy it was for adolescents to acquire prescription medications. The question asked was ‘which is easiest for someone your age to buy: cigarettes, beer, marijuana or prescription drugs without prescription?’ Nineteen per cent of respondents said it was easier to buy prescription drugs compared to 13 per cent in the previous year.[36]
Two main factors contribute to the excessive availability of medicines: physicians and patients. Physicians appear to prescribe more medicines than in the past. For example, there was a 300 per cent increase in the prescription of painkillers in the US in 1999.[35,37] According to the National Institute on Drug Abuse (NIDA), the number of potentially addictive drug prescriptions for pain rose to 200 million in 2011.[38] There is also an association between patient deaths and physicians who frequently prescribe painkillers. Dhalla published a study in Ontario in 2011 that investigated the opioid prescription rate among family physicians and its relation to opioid-related deaths. He found that the top 20 per cent of physicians had a prescribing rate 55 times higher than that of the 20 per cent of physicians who prescribed the least, and this top 20 per cent were responsible for 64 per cent of patient deaths caused by painkillers.[39] In addition, many people falsely report symptoms in order to obtain a prescription; this is defined as drug-seeking behaviour. The drugs most associated with drug-seeking behaviour are benzodiazepines and opioids.[40]
Alcoholism is considered a major risk factor for intentional overdoses. Several studies state that the risk of drug overdose from prescription medicines is higher among people who drink alcohol.[41-43] A study by Li in 2011 investigated trends of paracetamol overdose in US emergency departments from 1993 to 2007 using data from physicians’ diagnoses codes and cause of injury codes. The author found those who drank alcohol were 5.48 times more likely to overdose compared to people who did not drink alcohol, and the p-values were statistically significant.[41]
A study published by Wazaify in 2005 examined OTC and prescription drug overdoses over three months, as well as the potential risk factors. The study investigated 247 overdose cases, excluding alcohol intoxication and spiked drinks. It found alcohol was a major risk factor for overdoses of both OTC and prescription drugs, and that alcohol contributed more to OTC drug overdoses (32.2 per cent) than to overdoses of prescription drugs combined with OTC drugs (24.7 per cent).[42] Moreover, the prescription drug overdose death rate increases with alcohol consumption: a study in West Virginia found that 32.9 per cent of overdose deaths were associated with alcohol consumption.[43] Another study on paracetamol overdose found more than one-third of drug overdoses were associated with alcohol consumption at the time of overdose, and the proportion was slightly higher in males (12 per cent) than females (11 per cent).[44] In addition, people who consumed alcohol could overdose on lower doses of paracetamol compared with those who did not consume alcohol.[45] Paulozzi conducted a study on methadone overdose and found that the concentration of methadone was lower when alcohol was involved.[28] Mixing drugs with alcohol is therefore considered a risk factor for drug overdose.[46]
Violence involving sex and family could also be associated with intentional drug overdose.[33,47] A study by Budnitz investigated the pattern of acetaminophen overdoses in the emergency department using two components of the National Electronic Injury Surveillance System. Of the 2,717 annual acetaminophen overdose cases, 69.8 per cent were related to self-directed violence. Further, females had a greater rate of self-directed violence (27.2 per 100,000) than males (14.4 per 100,000).[33] Violence and strife have also contributed to the increased rate of illicit drug use in the US.[47]
Drug overdoses can be associated with people who take drugs for recreational purposes. According to the Centers for Disease Control and Prevention (CDC), opioids are involved in more overdose deaths than heroin and cocaine combined, and they are often associated with recreational use.[4] Further, several studies found that recreational use contributed to many of the drug overdoses presenting to emergency departments. For example, a study found that 15.4 per cent of 500 overdose cases presented to emergency departments resulted from recreational use.[48] Further, a survey of 975 students found that 16 per cent abused medicine for recreational purposes.[49]
Buykx found that many people overdose on drugs after they experience interpersonal conflicts.[31] Britton’s 2010 study investigated the risk factors of non-fatal overdoses over 12 months. The author recruited 2,966 participants and found that 23.5 per cent of all overdose cases had experienced sexual abuse. Moreover, victims of sexual abuse were 2.02 times more likely to overdose, and the result was statistically significant.[50] Other forms of physical abuse were also addressed in the study: 33.4 per cent of all overdose cases had experienced physical abuse, and they were 1.91 times more likely to overdose, which was statistically significant.[50]
The level of a medicine’s purity could lead to a drug overdose, particularly for people using non-prescription medicines. Previous studies have found the fluctuation of heroin purity contributed to the overdose rate.[51] Moreover, in a survey of healthcare providers that asked about risk factors for opioid overdose, approximately 90 per cent mentioned the fluctuation of opioid purity.[52] Of 855 heroin users, 29 per cent split the tablets in half when the purity was unknown.[53] In addition, a study stated that many heroin users believed that purity fluctuation contributed to drug overdose.[54] Conversely, several studies on heroin (e.g. Toprak and Risser) found no association between drug overdose and purity.[55,56]
Other factors that contribute to intentional drug overdose include psychiatric illness, marital problems and family size.[6] Ahmed found that psychiatric illness was a greater risk among males than females, and it was a risk factor in 10 of the 50 cases he investigated. Further, five cases had experienced marital problems.[6] Family size could be a major factor in drug overdose. Large families are common in Saudi culture. A 2011 study by Bani found that 43 per cent of participants had six to eight family members.[57] A study by TNS Middle East of demographic characteristics in Saudi Arabia in 2006 found that 40 per cent of Saudi families are considered large, with six or more members.[58] Aljahdali found that large family size was a risk factor in drug overdoses: 59 per cent of the 79 cases in his study had more than five family members. This could indicate that because large families have more children, parental supervision of the children is reduced, potentially increasing the chance of unsupervised ingestion of drugs.[22]
A previous drug overdose might also be a risk factor for another drug overdose, as many studies have attested.[59],[60],[46] For example, Kinner’s study in 2012 investigated the risk factors of non-fatal overdoses among illicit drug users, recruiting 2,515 illicit drug users in Vancouver, Canada. The author found an association between drug overdoses and previous drug experiences; people with previous overdoses were four times more likely to overdose compared with people who had no previous experience.[59] This finding is similar to that of a study by Hall in 2008, which investigated the pattern of unintentional drug overdose caused by prescription drugs, recruiting 295 participants. The author found that people who had experienced a previous overdose had a 30.2 per cent chance of overdose compared with 14.4 per cent of people who had not.[60] In addition, a New York study that investigated the risk factors of heroin users found that participants who had overdosed were 28 times more likely to overdose than those who had not experienced a previous overdose.[46] In contrast, some previous studies found no associations between drug overdose and previous overdose experience.[59]
Doctor shopping is considered the most common method of obtaining prescription drugs for legal and illegal use.[61-64] It is defined as patients visiting several doctors to obtain prescription medicines without medical need, and it is considered one of the major mechanisms of diversion.[35] Several studies have found that doctor shopping contributes to drug overdose. For example, Hall found that doctor shopping contributed to 21.4 per cent of 259 overdose cases.[65] Another study found that 19 per cent of participants who overdosed acquired their medicines through doctor shopping.[49] Moreover, doctor shopping is associated with a higher rate of drug overdose death.[65,66] Several studies have stated that controlling doctor shopping would assist in preventing drug overdoses.[35,67]
The consumption of prescription drugs, especially opioids, has increased due to their euphoric and energising effects.[4] For example, methamphetamine and alprazolam users tend to redose every three-to-eight hours to maintain the euphoric effect.[46] Further, drug users tend to abuse cocaine to feel euphoric and increase a feeling of sexual desirability.[68] Some medicines do not enhance euphoria until taken in higher doses. For example, drug users take higher doses of benzodiazepine to experience the euphoria effect.[24] Many fatal overdoses occur when larger doses of medicines have been taken to achieve the euphoric effect.[37]
Long-term therapy could be related to overdoses, especially in patients suffering from chronic pain. Further, such patients have easy access to painkillers in the home, which increases the chance of a fatal overdose.[69] Previously, long-term therapy was restricted to cancer patients; however, currently, it is commonly used for chronic pain in non-cancer patients. Unfortunately, the latter have been associated with higher overdose rates.[70] One of the reasons for drug overdose in chronic patients is inadequate pain management.[69,70] The critical issue with chronic pain is pain management, and inadequate pain management could lead to increased doses of painkillers and consequently, an increased rate of drug overdoses.[71]
Calculating the dose is an important factor, and miscalculated doses could lead to unintentional overdoses.[72] Many parents have difficulty measuring and calculating the appropriate dose of paracetamol for their children.[73] One survey asked 100 caregivers if they were able to determine the appropriate dose for their children; only 30 per cent were able to do so.[73] Hixson conducted a study in 2010 to compare the ability of parents to calculate the appropriate dose of acetaminophen using product information leaflets or the Parental Analgesia Slide. Participants were divided into two groups, and a questionnaire was distributed to each group. The author found that caregivers using the Parental Analgesia Slide had fewer dosing errors than caregivers using product information leaflets, but the difference was not statistically significant.[74] Limited literacy and numeracy skills are also associated with poor clinical outcomes and overdoses. Many people with limited numeracy skills are confused with dosing instructions and warning labels. Moreover, people could be confused with the information on the labels of prescription medicines.[75]
Mental states could be a major risk factor of drug overdose, as patients with mental disorders and drug addictions are more vulnerable.[76] For example, Hasin’s study found that 15–20 per cent of patients with mental disorders overdosed on drugs at least once in their lives, and patients with depressive disorders were 3.7 times more likely to overdose.[76] Fischer’s study found that people with mental problems were 1.51 times more likely to overdose than people without mental problems, but this result was not statistically significant.[77]
Children are considered at greater risk of drug overdose for several reasons. Inappropriate storage and disposal of medicines can contribute to this risk.[78] For example, according to the CDC, one of the main causes of drug overdose reported to emergency departments is the unsupervised ingestion of OTC and prescription medicines. Further, the CDC stated that of the 72,000 overdose cases presented to emergency departments in 2004, more than 26,000 were caused by OTC drugs.[79] Additionally, Li’s study found that children under the age of five accounted for a higher percentage of drug overdose cases in emergency departments,[41] while another study which investigated 3,034 overdose cases among children found 97 per cent of the cases resulted from the unsupervised ingestion of drugs.[80]
Older age is associated with a higher drug overdose rate for several reasons. First, elderly people aged 65 years and over tend to have more medical problems; thus, they may take many medicines that might interact with each other and cause an overdose.[79] Second, many elderly people live independently and might find it difficult to calculate the correct dose. In addition, they may not recognise the symptoms of drug overdose when it occurs.[81] Suicide attempts by taking an excessive amount of a drug are also common among elderly people. Several factors contribute to such attempts, including old age, failing physical and mental health, reduced income and reduced social support.[82]
Maintaining a dose is an important factor in preventing intentional overdoses among chronic patients.[83,84] When medicines such as Warfarin have a narrow therapeutic index, it is critical to adjust the dose appropriately.[83] Physicians prefer not to dispense Warfarin because of the uncertainty of patient compliance with monitoring, dietary implications and the fear of haemorrhagic complications.[85] Determining the initial dose is challenging and errors could result in bleeding; many patients might overdose at the beginning of therapy because they have Warfarin sensitivity or a poor metabolism and thus require a reduced dose. Maintenance dosing depends on several factors, such as weight, diet, disease state and concomitant use of other medications, as well as genetic factors.[84] Genetic variability is considered a major factor in Warfarin overdose: two genes, cytochrome P450, family 2, subfamily C, polypeptide 9 (CYP2C9) and vitamin K epoxide reductase complex, subunit 1 (VKORC1), contribute significantly to the variability among patients in Warfarin dose requirements.[84,86]
Misunderstanding and misreading prescription abbreviations can lead to medication errors and overdoses. One report demonstrated that a woman had a severe digoxin overdose because her nurse misread the pharmacist’s instructions. The pharmacist had used the abbreviation (=), which was unclear because the pen had trailed ink.[87] Maged conducted a study in Saudi Arabia in 2010 to investigate medication errors in prescription medicines. Of the 529 dosing errors, the author found that 46 per cent caused overdoses; the two main errors concerned the route and frequency of the medicines.[88] Further, many parents have difficulty understanding the instructions to administer appropriate doses for their children. A study in 2008 examined caregivers’ understanding of the age indications of OTC drugs and cough medications. Of the 182 participants who misunderstood the dosing instructions, more than 80 per cent had given medicine to their infants when they should have consulted a physician first.[75]
Many people believe that using excessive amounts of OTC medicines is safe and effective. Some people believe that if a medicine is OTC, it is safe to consume in large quantities.[33],[44] For example, paracetamol is considered a safe medication. However, it has a narrow therapeutic index, so the dangerous dose is close to the recommended dose, and an excessive dose could lead to liver toxicity.[33] Simkin’s study found that 20 per cent of the 60 participants did not know the dose that could cause death, and 15 per cent believed that 100 tablets or more would cause death.[44] Advertising and media could contribute to the excessive amounts of OTC drugs administered; for example, advertisements could suggest that the consumption of large amounts is effective before seeing a doctor.[89] Wazaify claims that there is aggressive marketing and advertising for OTC medicines.[42]
There is a higher risk of drug-related deaths among recently released prisoners,[20,90] with most overdoses occurring in the first few weeks after release.[90,91] Many studies state that the leading cause of death for recently released prisoners is accidental drug overdose.[92] For example, a study found that recently released prisoners have an overdose rate that is 12 times higher than the general population.[91] In addition, another study found that the overdose mortality rate is three-to-eight times higher in the first two weeks after release compared to the subsequent 10 weeks.[20] The reasons for these higher overdose rates are not well understood; however, previous studies have suggested possible explanations, including poor housing, unemployment, psychosocial problems and barriers to health care.[93-95]
Another major factor related to the increase in drug overdose rates is the lack of education, which includes the education of healthcare providers, miscalculation of doses, and limited literacy and numeracy.[35,45,67] Manchikant states that many healthcare providers, such as physicians and pharmacists, do not have adequate education regarding drug misuse.[35] In 2012, Taylor investigated the pattern of acetaminophen overdose in the military and found that a lack of education was a risk factor.[45] The CDC stated that the majority of healthcare providers have only minimal educational background regarding prescription drug misuse, and they could prescribe addictive medicines without being aware of the risks involved.[67] Wallace demonstrated that physicians have limited knowledge in detecting, investigating and managing acetaminophen overdoses. Further, Wallace’s study showed that the management of overdoses improved when physicians had more knowledge, and a management flowchart for paracetamol poisoning was introduced to help physicians treat overdose cases.[96]
Income could be a major factor in drug overdose. People with low incomes could have lower education and numeracy levels compared to those with higher incomes. This is supported by Lokker’s study of parental misinterpretations of OTC medication labels, which found that 42 per cent of parents who misinterpreted the labels had an income of less than $20,000 per annum.[75] Further, people with low incomes had more motivation to misuse prescription medicines compared to those with higher incomes.[97] In addition, low-income people were six times more likely to overdose on prescription painkillers, and a US study found that low-income people accounted for 45 per cent of prescription overdose deaths.[37] The CDC also noted that low-income people are at a greater risk of drug overdose.[67] In contrast, Yu’s study in 2005 investigated drug misuse admissions to the emergency department of a large metropolitan teaching hospital in Taipei, Taiwan, and found that those on high incomes were more likely to misuse drugs than low- and medium-income people, a statistically significant result.[98] Another study by Hall, which examined the pattern of unintentional drug overdoses, categorised participants’ incomes into four quartiles. He found that the highest income quartile had a greater risk of drug overdose (24.7 per 100,000 people) than the other quartiles. Further, doctor shopping was related to the highest quartile, which accounted for 58 per cent of doctor shoppers.[60] Paulozzi’s study also categorised income into four quartiles and found that the highest income quartile was at greater risk of death from methadone overdose (29.9 per cent) and from other opioid analgesics (33.1 per cent).[28]

Adverse Drug Reactions (ADRs) are considered the fifth leading cause of death and illness in the developed world, with direct medical costs estimated at US$30–130 billion annually in the US and 100,000–218,000 lives claimed annually.[99] Despite this, health-related associations estimate that 95 per cent of all ADRs in Canada and the US are not reported.[100] Many drugs have caused adverse drug reactions after being approved, and this has been attributed to drug safety issues: in Canada, for example, 3–4 per cent of approved drugs will eventually be withdrawn from the market because of safety issues, and faster approval of new drugs has the potential to produce more safety problems once drugs are on the market. Many agencies have launched post-marketing surveillance and pharmacosurveillance systems aimed at generating safety signals for marketed drugs.[101]
Identifying patterns of drug overdose will help to implement evidence-based policies. In a study in the UK on the effects of the withdrawal of Distalgesic (a prescription-only analgesic compound) from the market, the author found an 84 per cent reduction in intentional drug overdoses presenting to emergency departments in hospitals compared with the three years before the drug was withdrawn. Further, there was a marked reduction in tablet sales after the medicine was withdrawn, from 40 million in 2005 to 500,000 in 2006. Thus, identifying drugs that are commonly involved in overdoses will help in reducing the overdose rate.[102]
## 3. Methodology
An emergency department visit for drug overdose was the primary outcome measure, including unintentional and intentional overdoses. Drug overdoses were identified by physicians in the emergency department using the terms overdose, poisoning and drug-related problem. Secondary measures included the patient’s age, gender, interior personnel occupation, Length of Stay (LOS) in the emergency department, patient type, drug level, previous admission, previous overdose and outcome management.
In this research, participants are categorised into three groups. The first group, interior personnel, comprises people who work in the Ministry of Interior. The second group is interior personnel relatives, as each employee has the right to have his family treated in the hospital. The third group, exceptional people, comprises those who do not belong to the Ministry of Interior but seek treatment in the hospital under an exceptional letter because they require special health intervention.
Overdose cases obtained in the study are divided into prescription and non-prescription drugs in order to test the hypothesis of the study. Moreover, the number of medicines involved in each case is addressed, and the drugs are categorised into three groups: single, double and triple. In addition, drug level is addressed in the study and is standardised as moderate or severe according to the level of drug in the body. The LOS of each patient was determined by calculating the period between the time of admission and the time of discharge from the emergency department, or the time of transfer to the inpatient department. Outcome management is also included in the study, and all cases are divided into two categories: discharged from the emergency department and admitted to the inpatient department.
Descriptive statistics such as frequencies and cross-tabulations were obtained to describe the various motives reported by the sample. All drugs involved in overdoses were recorded, together with their frequency, to identify the medicines that accounted for the highest percentages. Further, the medicines used in overdoses were tabulated according to their medical indication and then grouped by indication category. Fisher exact and chi-square tests were conducted to test for differences between categorical variables. As the chi-square test works best with two-by-two tables and requires an expected count of at least five in each cell, patient type was re-categorised into two groups: interior and non-interior. Even with patient type in two groups, one of the cells had a count of less than five, so a Fisher exact test was used.
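As an illustration of this kind of analysis, a two-by-two table can be tested in Python with scipy. This is only a sketch: the counts below are hypothetical placeholders, not the study’s data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: patient type (interior / non-interior) by outcome
# (inpatient admission / discharge). Counts are illustrative only.
table = [[28, 2],
         [78, 32]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table, alternative="greater")  # one-sided test

print(f"chi-square p = {p_chi2:.3f}")
print(f"Fisher exact one-sided p = {p_fisher:.3f}")
```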
The data were obtained from medical records, which raises the issue of confidentiality. However, the anonymity of participants was protected, and only de-identified data were accessed. A letter was obtained from the hospital to ensure the anonymity of the research, and data were obtained only from files considered essential for the research. No patient was contacted as part of this study, and each participant was assigned a unique three-digit code. The data collection complies with the National Health and Medical Research Council’s National Statement on Ethical Conduct in Human Research. The study was approved by the Security Forces Hospital’s Research Committee and by the UWA’s Human Research Ethics Committee.
## 4. Results
### 4.1. Demographic characteristics
One hundred and forty drug overdose cases were admitted to the emergency department of the Riyadh Security Forces Hospital between 1 January 2007 and 31 December 2011. Table 1 describes the demographic characteristics of patients associated with drug overdose, and the findings are discussed below. Females accounted for 57.9 per cent of cases and males accounted for 42.1 per cent. The age distribution varied widely, with patients aged between 11 months and 86 years. The patients’ ages were divided into seven groups: (0.01–1.12), (2.00–9.12), (10.00–19.12), (20.00–29.12), (30.00–44.12), (45.00–59.12) and (over 60 years). This study demonstrates that the groups (2.00–9.12 years) and (over 60 years) accounted for the highest percentage of drug overdose cases (22.9 per cent each).
| Characteristics | Number | Per cent |
| --- | --- | --- |
| **Gender** | | |
| Male | 59 | 42.1 |
| Female | 81 | 57.9 |
| **Age groups** | | |
| 0.01–1.12 years | 8 | 5.7 |
| 2.00–9.12 years | 32 | 22.9 |
| 10.00–19.12 years | 9 | 6.4 |
| 20.00–29.12 years | 25 | 17.9 |
| 30.00–44.12 years | 19 | 13.6 |
| 45.00–59.12 years | 15 | 10.7 |
| Over 60 years | 32 | 22.9 |
| **Type** | | |
| Interior personnel | 30 | 21.4 |
| Relatives | 105 | 75.0 |
| Exceptional people | 5 | 3.6 |
| **Income groups** | | |
| Less than 22,000 USD | 37 | 27.4 |
| 22,001–45,000 USD | 69 | 51.1 |
| 45,001–67,001 USD | 18 | 13.3 |
| More than 67,001 USD | 11 | 8.1 |
### Table 1.
Socio-demographic characteristics of drug overdose cases.
The interior personnel relatives group accounted for the highest percentage of cases (n=105, 75 per cent), while interior personnel and exceptional people accounted for 21.4 and 3.6 per cent respectively. Income was divided into four groups, expressed in United States Dollars (USD) per annum: less than 22,000 USD; 22,001–45,000 USD; 45,001–67,001 USD; and more than 67,001 USD. The study showed that the 22,001–45,000 USD group represented the highest percentage of participants.
According to Table 2, 96.4 per cent of all drug overdose cases reported to the emergency department between January 2007 and December 2011 were caused by prescription medicines. Previous overdoses were addressed in the study, and only eight patients were found to have previous overdose experiences. Further, the study found that 53.6 per cent of cases were associated with a previous admission, and patients with one previous admission represented 20 per cent of all participants. Some patients used more than one drug to overdose: 91.4 per cent of patients overdosed on a single drug, while double and triple drug combinations accounted for 7.9 per cent and 0.7 per cent respectively. In addition, 67.9 per cent of the cases were found to have moderate drug levels, while severe drug levels accounted for 26.4 per cent of cases.
LOS was categorised into the following groups: less than five hours; 5.01–10.01 hours; 10.01–15.00 hours; 15.01–20.00 hours; 20.01–35.00 hours; and over 40 hours. Half (50.0 per cent) of all cases reported to the emergency department stayed for less than five hours and were either discharged or transferred to the inpatient admission department. Overall, 106 drug overdose cases were referred to the inpatient admission department.
Interior personnel relatives accounted for 75 per cent of all overdose cases in the study. Of these, 28.6 per cent were aged 2–9.12 years, and 54 per cent had an income of between 22,001 and 45,000 USD per annum. Eight cases were associated with a previous overdose, and seven of them were relatives. Moreover, 49.5 per cent of the relatives’ cases stayed in the emergency department for less than five hours. Further, of the 34 discharged cases in the study, 28 were relatives’ cases.
The outcome of a drug overdose differed significantly by patient type (one-sided p-value = 0.007): the inpatient admission department accounted for 93.3 per cent of all interior personnel cases, while for non-interior people (relatives and exceptional people) the figure was 78 per cent, so the difference in outcome management between patient types is significant. Using the Fisher exact test, previous admission was also significantly related to patient type (one-sided p-value = 0.033): 70 per cent of interior cases and 50.1 per cent of non-interior cases were associated with a previous admission. The relationship between drug level and outcome management was tested using a chi-square test, and a significant difference was found: 72.6 per cent of moderate-level cases and 91.9 per cent of severe-level cases were admitted to the inpatient department, so outcome management was significantly related to drug level (one-sided p-value = 0.008). Finally, gender was significantly related to patient type on the Fisher exact test (one-sided p-value < 0.001).
| Characteristics | Number | Per cent |
| --- | --- | --- |
| **Previous overdose** | | |
| Yes | 8 | 5.7 |
| No | 132 | 94.3 |
| **Previous admission** | | |
| Yes | 75 | 53.6 |
| No | 65 | 46.4 |
| **Number of previous admissions** | | |
| 0 | 65 | 46.4 |
| 1 | 28 | 20.0 |
| 2 | 21 | 15.0 |
| 3 | 8 | 5.7 |
| 4 | 6 | 4.3 |
| 5 | 4 | 2.9 |
| 7 and more | 8 | 5.6 |
| **Drug kind** | | |
| Prescription | 135 | 96.4 |
| Non-prescription | 5 | 3.6 |
| **Drug combination** | | |
| Single drug | 128 | 91.4 |
| Double drugs | 11 | 7.9 |
| Triple drugs | 1 | 0.7 |
| **Drug level** | | |
| Moderate | 95 | 67.9 |
| Severe | 37 | 26.4 |
| **LOS groups** | | |
| Less than five hours | 70 | 50.0 |
| 5.01–10.01 hours | 24 | 17.1 |
| 10.01–15.00 hours | 12 | 8.6 |
| 15.01–20.00 hours | 13 | 9.3 |
| 20.01–35.00 hours | 10 | 7.1 |
| Over 40 hours | 11 | 7.9 |
| **Outcome management** | | |
| Discharge | 34 | 24.3 |
| Inpatient admission | 106 | 75.7 |
### Table 2.
Characteristics of drug overdose cases
### 4.2. Drug overdose percentages and rates
The mean LOS in the emergency department, age and income per annum of patients were calculated: the average LOS was around 11 hours, the average age was 33 years and four months, and the average income was around 35,951 USD.
The number of drug overdose cases was calculated for each year of the study, and the annual number of emergency admissions was requested from the medical records department to identify the rate of drug overdose cases among all emergency cases. The results are shown in Table 3. According to the results, the rate of drug overdose decreased between 2007 and 2011.
| Year | Number of cases | Number of emergency cases | Rate |
| --- | --- | --- | --- |
| 2007 | 33 | 9576 | 3.45 per 1,000 |
| 2008 | 30 | 9131 | 3.26 per 1,000 |
| 2009 | 26 | 8707 | 2.99 per 1,000 |
| 2010 | 26 | 8209 | 3.17 per 1,000 |
| 2011 | 25 | 7883 | 3.17 per 1,000 |
### Table 3.
Number and rate of drug overdose for each year in the study
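The per-1,000 rates in Table 3 can be recomputed directly from the counts. A small sketch (minor rounding differences from the published figures are possible):

```python
# Annual overdose cases and total emergency cases, taken from Table 3.
cases_by_year = {
    2007: (33, 9576),
    2008: (30, 9131),
    2009: (26, 8707),
    2010: (26, 8209),
    2011: (25, 7883),
}

for year, (overdoses, emergency_visits) in cases_by_year.items():
    rate_per_1000 = overdoses / emergency_visits * 1000
    print(f"{year}: {rate_per_1000:.2f} per 1,000 emergency cases")
```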
Most patients overdosed on one drug. Fifty-eight prescription and non-prescription medicines were included in the study. These medicines were categorised by medical indication. Seven drug categories were found in the study, each involving more than seven cases. Anti-coagulants and analgesics together accounted for 35.3 per cent of drug overdose cases. These categories were investigated in terms of age groups: 55 per cent of anti-coagulant overdose cases occurred in patients aged over 60 years, while 41 per cent of analgesic overdose cases occurred in patients aged 20–30 years. According to the findings, Warfarin accounted for the highest percentage of drug overdoses: 85 per cent of Warfarin overdoses occurred in patients aged over 50 years, while two cases occurred in children and middle-aged people respectively. Further, people in the lower and middle income groups accounted for 85.7 per cent of anti-coagulant cases.
The results show that four patients aged 20.00–29.12 years had a previous overdose, and this age group represented 50 per cent of patients with previous overdoses. Moreover, two patients overdosed on OTC medicines twice, and one patient overdosed twice on Warfarin. Two deaths occurred from drug overdoses: one was a patient who had overdosed on paracetamol twice, and the other was attributed to amphetamine. In addition, there were 18 overdose cases aged from 15 to 25 years, in which analgesics and antipsychotics accounted for 38.8 and 22.2 per cent of cases respectively. Cholesterol-lowering and diabetic medicines were involved in two cases, and antihistamine and antiepileptic drugs were involved in one case each.
As the hospital belongs to the Ministry of Interior, it is important to identify the occupations that are most involved in drug overdoses. There were 30 interior personnel cases, and eleven positions represented all interior personnel drug overdose cases. The occupational system in the Ministry of Interior has two major categories: officer and non-officer. Non-officer personnel presented with drug overdoses at a higher rate than officer personnel: five of the 30 cases belonged to officers and the rest belonged to non-officers.
| Position name | Frequency |
| --- | --- |
| Soldier | 7 |
| First soldier | 5 |
| Captain | 4 |
| First sergeant | 3 |
| Sergeant | 2 |
| Staff sergeant | 2 |
| Unknown | 2 |
| Colonel | 1 |
| Corporal | 1 |
| Chief sergeant | 1 |
| Porter | 1 |
| Senior sergeant | 1 |
### Table 4.
Occupations of interior personnel cases and their frequency
## 5. Discussion
We found that females accounted for a higher percentage of drug overdose cases than males, which is similar to findings in previous studies.[6,22,103,104] In contrast, Elfawal’s study found that males accounted for 88 per cent of the cases.[7] Further, this study found that patients aged 2.00–9.12 years and over 60 years each accounted for the highest percentage of cases, at 22.9 per cent. Previous studies associated the elderly with a higher percentage of drug overdoses, and this study had a similar result.[79,82] Further, the CDC found that the highest-risk group for drug overdose among children was those aged two years,[80] and this research reached the same conclusion. Thus, most cases might have occurred unintentionally, because previous studies demonstrated that children and the elderly are at a higher risk of unintentional overdose.[80]
The findings show that 75.7 per cent of cases were referred to the inpatient admission department. According to the medical records department supervisor, this high percentage is not because most cases were severe; rather, many cases that presented to and were discharged from the emergency department were missing and were not recorded in the medical files. There are two possible reasons for missing drug overdose cases. First, there is a high load on emergency physicians, so many diagnostic forms are not fully completed. Second, many cases presenting to the emergency department are not registered in the patients’ medical records, so some overdose cases may have been missed and not captured by the ICD-9-CM. This explains why there were only 140 drug overdose cases in the five-year period. A study conducted in the National Guard Hospital in Riyadh, which is considered larger than the Security Forces Hospital, found nine drug overdose cases per month,[105] compared to around three cases per month in this study. This is further evidence that there might be missing cases.
Fifty-eight medicines were involved in the drug overdose cases, and the most common drug was Warfarin, which caused 29 overdose cases. This finding differs from previous studies conducted in Saudi Arabia, which found that OTC medicines accounted for a large percentage of drug overdoses. Moazzam’s and Aljahdali’s studies found that paracetamol accounted for 24.1 per cent of 170 drug overdose cases and 30 per cent of 79 cases, respectively.[21,22] Ahmed’s study found that mefenamic acid accounted for 20 per cent of 50 cases investigated.[6] Moreover, Malik’s study found that the most common drugs used were analgesics and non-steroidal anti-inflammatory drugs.[104] AbuMadini’s study found that 80 per cent of cases were caused by paracetamol,[103] which was the second most common medicine in this study (12 cases).
According to the findings, children aged 2–9 years accounted for 22.9 per cent of drug overdose cases. This might indicate that many parents leave medicines within reach of children, so children might accidentally ingest excessive amounts of drugs. Education and awareness campaigns should be conducted to educate people about the risk of leaving medicines unprotected, as well as how to store their medicines correctly.[1,75,106,107] Further, leaving medicines unsecured from children can contribute to an increase in the rate of drug overdose.[78,108,109] According to the American Association of Poison Control Centers, in 2009, prescription and OTC drugs caused more than 30 per cent of children’s deaths in the US.[110]
Many policy and prevention measures can be implemented to protect children from drug overdoses, such as child-resistant packaging (CRP), product reformulations and heightened parental awareness. CRP reduced the drug overdose mortality rate of children by 45 per cent between 1974 and 1992.[22,80] Medication packaging alone, however, will not protect children from overdose, and it becomes ineffective if the medication is not re-secured correctly.[109] Further, packaging has not proved fully effective, as some young children have the dexterity to open these containers.[111] Some prevention programs have been conducted to educate parents about storing medicine in safe places. The Preventing Overdoses and Treatment Exposures Task Force (PROTECT) launched a program called ‘Up and Away’, which aims to educate parents about effectively storing medicines, and it emphasises the need to return medicines to a safe storage location immediately after every use to prevent children from reaching them.[109]
Other strategies might also help to prevent drug overdoses in children: fitting adaptors on bottles of liquid medication so that the medication can be accessed only with a needleless syringe; not allowing children to drink medicine directly from the bottle; and using unit-dose packaging to reduce accidental ingestion. These strategies are highly recommended for common medicines such as OTC drugs.[80]
As children account for a high percentage of drug overdose cases, parents’ misunderstanding and miscalculation of doses can contribute to a higher rate of overdoses. Contributing factors include limited literacy and numeracy, particularly in relation to age-based dose indications. This problem is amplified for OTC medicines, as no instructions are provided directly by healthcare providers.[28,112] Using simple-language instructions and warning labels in medicine leaflets might help parents to calculate correct doses.[75] Further, healthcare providers should request that parents with low literacy levels use one product for all children in the family, which might help to prevent dose miscalculation.[73]
Another major factor contributing to drug overdoses in children is the availability of unused drugs in homes.[113] The solution for this problem is medication disposal. Campaigns for the disposal of medications have been used in many countries, which would help to reduce accidental drug overdoses in children, intentional drug abuse and the accumulation of drugs by elderly people, as well as protect the physical environment and eliminate waste in the healthcare system.[113,114] The government of Ireland launched a campaign called Dispose of Unused Medicines Properly (DUMP), which encouraged the public to return unused or expired medicines to community pharmacies. The project was launched in 2005, and 9,608 items were returned in the first year and 2,951 kilograms were returned in 2006. The most common medicine group returned was the nervous system class, which accounted for 26.3 per cent.[115]
A study conducted in Saudi Arabia in 2003 identified the issue of unused and expired medicines in Saudi dwellings. The study recruited 1,641 households in 22 cities and found that more than 80 per cent of Saudi families had more than five medicines, with an average of more than two medicines that were expired or unused. The most common drugs found in the participants’ houses were respiratory drugs (16.8 per cent), followed by central nervous system (CNS) agents (16.4 per cent) and antibiotics (14.3 per cent). Of the 2,050 CNS medications, OTC analgesics (including non-steroidal anti-inflammatory agents) constituted 49.9 per cent of the total (n = 1,023). Further, 51 per cent of all medicines found were not currently used and, of these, 40 per cent were expired. Medication wastage can therefore provide greater opportunities to access prescription drugs in Saudi Arabia. The study recommended medication disposal campaigns to reduce the danger of available unused and expired medicines.[116]
According to the results, a large number of different medicines were involved in drug overdoses, which might indicate that many patients keep excessive amounts of medicines in their dwellings. One factor that might contribute to this excess is drug-seeking behaviour. Warfarin accounted for the highest percentage of drug overdoses. Eighty-five per cent of all Warfarin overdoses occurred in patients aged over 50 years, while two cases occurred in children and the middle-aged, respectively. This might indicate that these overdoses occurred unintentionally. One of the major reasons for Warfarin overdoses is its narrow therapeutic index; thus, administering a larger dose can easily lead to an overdose.[117] Further, Warfarin has complex pharmacology and an inherent risk of adverse outcomes. As it is used continuously, maintaining the correct dose is critical to ensure safe and effective therapy.[117]
According to the results, females accounted for a higher percentage of drug overdoses. Several factors might contribute to this. First, family conflict has been identified as a major risk factor for drug overdose in women. Aljahdali’s study found that 80 per cent of the 79 overdose cases investigated were female, and 60 per cent had family conflicts.[22] Further, a study was conducted in the King Fahd Hospital of the University (KFHU) to investigate cases of deliberate self-harm presented to the emergency department of the hospital. The study recruited 362 cases, and the female to male ratio was 1.8:1. The study found that 71 per cent of cases were drug overdoses, of which 50.3 per cent were caused by family problems.[103] Moreover, a study conducted at KFHU in Saudi Arabia investigated non-fatal, deliberate self-harm cases. There were 55 cases investigated over nine months, and 80 per cent of them were female. The most common method used was self-poisoning (drug and chemical). The study found that family conflict was the main factor, contributing to 50.9 per cent of cases.[118]
The rate of drug overdose declined over the study period from 3.45 to 3.17 per 1,000; however, the number of emergency admissions also declined annually. This result contrasts with previous studies. For example, Moazzam found an increased rate of drug poisoning in the alQassim region of Saudi Arabia, from 6.6 per 100,000 in 1999 to 10.7 per 100,000.[21] Further, Malik found that the number of drug overdose cases presented to Asir Center Hospital increased from two cases in 1989 to 22 cases in 1993.[104] This may indicate that more preventive and awareness programs have been implemented in Saudi Arabia in recent years.
There are several pharmacosurveillance implications. One is collecting data regarding the motivations and causes of drug overdose. According to the findings, most cases occurred in the elderly and in children, so targeting these groups would help to reduce the rate of drug overdose cases.[119] Another implication is the use of electronic prescribing, defined as a tool for prescribers to electronically prepare and send an accurate, error-free and understandable prescription directly to a pharmacy. A previous study found that an electronic prescription system reduced medical errors by 55 per cent, from 10.7 to 4.9 per 1,000 patient-days.[120] According to the results, the rate of drug overdose cases in the emergency department decreased between 2007 and 2011, and this was attributed to the use of the electronic prescription system in the hospital.
Drug-related problems account for a large share of hospital costs. For example, a probability model in the US in 2002 estimated that morbidity and mortality associated with DRPs accounted for $76.6 billion in hospital costs. Further, a study conducted in Saudi Arabia in 2008 found that the estimated cost of a one-day admission for a drug-related problem was $666. Implementing preventive measures such as a pharmacosurveillance system would therefore be cost effective.[105]
Some policies might be implemented to reduce the risk of drug overdose cases. As Warfarin accounted for the highest percentage of drug overdoses, particularly in elderly people, further dose instruction should be given to elderly patients to ensure they have understood the instructions correctly.[85] Further, patients acquire Warfarin from the hospital; thus, if the quantity of medicine dispensed is reduced, drug overdose cases might be prevented. In addition, children accounted for the highest percentage of drug overdose cases, so policymakers should implement awareness courses to educate parents to secure and protect medicines from children.[80] A wide range of medicines was involved in drug overdose cases, so further dose instruction is needed. Moreover, patients must be educated regarding the dangers of overdosing.
## 6. Conclusion
Despite religious, cultural and legal deterrents, occasional cases of drug overdoses occur in the Saudi population.
The main limitations of this study relate to the quality of data available in patients’ medical records: many files might not be fully documented, and some variables relevant to the research were not found in the medical records (e.g. education level). Moreover, as income level was identified from the household occupation, and a number of files did not document the household occupation, some patients’ incomes were not available. The sample size is also small, which is why there are few statistically significant associations between variables; thus, the findings relating to associations between variables might not represent the actual validity of the associations between the independent and outcome variables. In addition, the data in this study were collected from a single institution, and its drug overdose patients have particular characteristics that might not be shared by the general Saudi population. For example, all people treated in the hospital obtain medicines from the pharmacy without any charge. Finally, this study does not state the reasons for drug overdoses, and it does not identify whether overdoses occurred intentionally or accidentally.
Some significant findings were made, such as Warfarin causing 29 overdose cases, with patients aged over 50 accounting for 85 per cent of all Warfarin cases. This finding signifies a problem with Warfarin in elderly patients, and further research is needed to identify the major cause of this high percentage and to assist in implementing preventive measures to protect the elderly from the risk of overdosing. Further, children accounted for a high percentage of drug overdoses, and the study found that 66.6 per cent of anti-hypertensive overdoses occurred in children. Thus, further research should be conducted to identify the reasons why children overdose so they can be protected from drug overdoses.
These findings could help the hospital to implement preventive strategies and policies. As many cases occur accidentally, education and awareness programs are required regarding dose instructions and the storage and disposal of medicines. Further, many patients keep excessive amounts of medicines in their dwellings, so reducing the amount of medicines provided to chronic patients would help to reduce drug overdose cases. Education of physicians on drug-seeking behaviour of patients is important. Further, special courses in dose instructions could be implemented for elderly patients, as well as programs that target parents regarding dose calculations for their children and the safe storage of medicines. In addition, clinical guidelines for overdose management need to be standardised, and the surveillance and recording of overdose information should be improved. Lastly, improved education is required for the public and for health workers in order to prevent drug interactions that might precipitate overdoses.
© 2013 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
# Why do Solve, NSolve, and FindRoot all fail on this simple, solvable actuarial equation?
If we express conditional mortality as a vector of annual probabilities of death, like so
qx1={0.04772, 0.05854, 0.07519, 0.09659, 0.11762, 0.13904, 0.16124,
0.18363, 0.2041, 0.22319, 0.24276, 0.262, 0.27897, 0.29458, 0.31044,
0.32691, 0.34597, 0.36573, 0.38348, 0.39799, 0.40855, 0.41447,
0.41774, 0.42266, 0.43064, 0.43913, 0.44417, 0.44802, 0.45, 0.45};
we can compute the associated survival vector with
lx1=Drop[FoldList[#1 (1 + #2) &, 1, -qx1], -1]
and we can find the median life expectancy of this population with
Length[Select[lx1, # >= .5 &]]
We can combine these two equations into a single function that translates the conditional mortality curve directly into a median life expectancy. It would look like
medianLEfromQx[qx_]:=Length[Select[Drop[FoldList[#1 (1 + #2) &, 1, -qx], -1], # >= .5 &]]
And using the sample data above, we see that the median life expectancy for the population is 7 years. If we want to see the implications of reducing annual mortality by 10%, we can ask
medianLEfromQx[.9*qx1]
And we get an answer of 8 years. Fine.
Here lies the problem -- I can't get Mathematica to solve for that multiplier given a desired LE.
Solve[medianLEfromQx[x*qx1]==8,x]
doesn't work (returning an empty set)
NSolve[medianLEfromQx[x*qx1]==8,x]
doesn't work, also returning an empty set, and
FindRoot[medianLEfromQx[x*qx1]==8,{x,1}]
doesn't work either, with the error message that "The Function value {False} is not a list of numbers with dimensions {1} at {x} = {1.}"
What am I doing wrong here?
• Not checking the actual input to Solve. Always check that it's what you think it is. – Daniel Lichtblau Mar 13 '14 at 21:51
First of all, check here and here. This is a common pitfall, and questions related to this are posted literally weekly, so I am going to let you review those articles.
In short, if you try evaluating medianLEfromQx[x qx1] with x having no value, you'll see that it returns a number. This expression evaluates inside FindRoot even before FindRoot gets a chance to substitute a value for x. So you would have to make medianLEfromQx not evaluate except for truly numerical vector arguments.
You can do this by changing its definition to look like:
Clear[medianLEfromQx]
medianLEfromQx[qx_ /; (VectorQ[qx, NumericQ])] := ...
Now medianLEfromQx[x qx1] won't evaluate unless x has a numerical value.
Next, Solve and NSolve won't work on numerical blackboxes, only FindRoot will. Solve only works with symbolic equations with exact coefficients. NSolve is designed for solving polynomial equations (or equations that can be reduced to a polynomial equation) numerically, thus it also needs to see the structure of an equation and won't work with a numerical black box.
So the only candidate here is FindRoot.
However FindRoot isn't very appropriate here either. The methods it can use all assume that the function they're working with is a "nice and smooth one". Your function always returns integers, so it has a "step structure". The default FindRoot method would try to approximate the derivative of the function and would of course fail: the derivative is zero everywhere.
You can use Brent's method, but this isn't ideal either:
FindRoot[medianLEfromQx[x*qx1] == 8, {x, 0, 2}, Method -> "Brent"]
Instead I would just plot the function and visually check the range of x which satisfies this equation.
Plot[medianLEfromQx[x qx1] - 8, {x, 0, 10}]
• The plot will easily give you an x that satsfies the equation. If you need the full precise range of x, you can either write a bisection method, or extract it from Plot. I typed this up in a rush, let me know if you need more info. – Szabolcs Mar 13 '14 at 20:50
• Yes, thanks. I need to apply this in an automated way, so I implemented a brute-force-ish method that works. I wonder, if questions of this sort are posted weekly, if Wolfram might change the error messages produced by these functions to be more useful, and/or write about these issues more directly in their documentation. – Michael Stern Mar 13 '14 at 21:52
• @MichaelStern Well, they already have the "knowledge base" article I linked to. I'm not sure it's possible at all to fix this shortcoming, it's too deeply rooted in the language. I think everybody here fell into this trap at least once while using Mathematica. I certainly did. – Szabolcs Mar 13 '14 at 22:00
• @MichaelStern Also sorry if the answer sounded a bit rough, I had to type it up in a hurry and in one go. I recommend you try the Brent method if all you need is finding one single value that satisifies the equation (i.e. if it doesn't matter whether the number returned is 0.86 or 0.89 for as long as both satisfy the equation). – Szabolcs Mar 13 '14 at 22:11
• Brent method proved very useful. – Michael Stern Apr 15 '14 at 21:31
The way I understand your setup makes it seem like there's a simpler approach. The survival vector should be monotonic. So if you want the life expectancy to be, say, 8 years, then you want the 8th entry to be 1/2.
Clear[multiplier];
multiplier[le_Integer, qx_?(VectorQ[#, NumericQ] &)] :=
Block[{x},
First @ Sort @
Select[
x /. NSolve[Drop[FoldList[#1 (1 + #2) &, 1, - x * qx], -1][[le]] == 0.5, x, Reals],
0 < # &]
]
Example
multiplier[8, qx1]
(*
0.940737
*)
Since this is using approximate reals, there may be some boundary issues occasionally. You also might want to add some sanity checks. For instance,
Table[multiplier[x, qx1], {x, 2, 20}]
(*
{10.4778, 5.50098, 3.3954, 2.27372, 1.62347, 1.21458, 0.940737,
0.748985, 0.610999, 0.508783, 0.430586, 0.36941, 0.320942, 0.28194,
0.249964, 0.223317, 0.200685, 0.181273, 0.164589}
*)
The first few multipliers would make the entries in qx1 greater than 1. In those cases, the function multiplier should print an error message I suppose.
Update 29 Dec 2014
In a comment, the OP was interested in adapting the above method to an interpolated mortality curve. Here's a way.
I'm unfamiliar with the standard way of interpolating mortality, but linear or exponential seem likely candidates. So one of these two, with the multiplier built into the InterpolatingFunction is the way to set it up:
Interpolation[
Transpose[{Range@Length@qx1, Drop[FoldList[#1 (1 + #2) &, 1, -x*qx1], -1]}],
InterpolationOrder -> 1]
Exp @* Interpolation[
Transpose[{Range@Length@qx1, Log @ Drop[FoldList[#1 (1 + #2) &, 1, -x*qx1], -1]}],
InterpolationOrder -> 1]
(* Use Composition[Exp, Interpolation[<..>] in V9 or earlier *)
Now the set up I really want is that the interpolation should be a function of a vector of annual probabilities of death qx. So the definition I will use is
lxIF[qx_?(VectorQ[#1, NumericQ] &)] :=
lxIF[qx] =
Evaluate[
Exp @* Interpolation[
Transpose[{Range@Length@qx, Log @ Drop[FoldList[#1 (1 + #2) &, 1, -#*qx], -1]}],
InterpolationOrder -> 1]
] &;
lx[x_, qx_?(VectorQ[#1, NumericQ] &)] := lxIF[qx][x];
A few things need mentioning. First, I replaced the multiplier x by the Function argument #. Second, we will be using this function many times, so I used memoization to cache the interpolation in lxIF[qx] the first time it is computed, so that it will be reused instead of recomputed. Finally, the function call lxIF[qx][x] replaces the argument # in lxIF[qx] by x and returns an InterpolatingFunction that is a function of life expectancy.
To calculate the probability of surviving 8 years for a multiplier x = 0.9, use
lxIF[qx1][0.9][8]
(* 0.516127 *)
To find the multiplier for the median life expectancy to be 8.5 years, use
FindRoot[lxIF[qx1][x][8.5] == 0.5, {x, 1.}]
(* {x -> 0.833662} *)
A general use function can be constructed thus:
multiplier2[le_?NumericQ, qx_?(VectorQ[#1, NumericQ] &)] :=
Block[{x}, x /. FindRoot[lxIF[qx][x][le] == 0.5, {x, 1.}]]
Note that FindRoot works well here because interpolating functions have derivatives. Even though lxIF has a discontinuous derivative, it is strictly monotonic, which makes root-finding easy.
The mean life expectancy for a given multiplier x can be computed with
meanLE[x_?NumericQ, qx_?(VectorQ[#1, NumericQ] &)] :=
NIntegrate[lxIF[qx][x][le], {le, 1, Length[qx]}];
FindRoot works on it, too.
• @MichaelStern I based my answer on how I understood your description of the problem. If I've made a mistake, please let me know. – Michael E2 Mar 13 '14 at 23:50
• that's a perfectly good method for integer Median LEs, thank you. I'm trying to think of how to generalize it for non-integer LEs and Mean LEs. – Michael Stern Apr 15 '14 at 19:58
• @MichaelStern From the code in your post, the median LE is given by Length and therefore is an integer. How do you generalize it to non-integer LEs? – Michael E2 Dec 29 '14 at 6:49
• In practice, people often interpolate a monthly mortality curve and compute their results on that basis. It is also possible to interpolate a continuous curve. – Michael Stern Dec 29 '14 at 12:17
• @MichaelStern Thanks. I made an update, which I hope is along the lines of you described. One question: It seems to me that the domain of the continuous interpolation should be {0, Length[qx]} -- i.e. from 0 to 30 in the case qx1. But Interpolation will only go from the end points of the data, 1 to 30. Would interpolating Transpose[{Range[0, Length@qx1], FoldList[#1 (1 + #2) &, 1, -x*qx1]}], i.e. not dropping the last survival probability and adjusting the domain to 0 to 30, be correct? (If I'm way off beam, I'll just delete this answer.) – Michael E2 Dec 29 '14 at 17:46 |
June 2011, 15(4): 1019-1044. doi: 10.3934/dcdsb.2011.15.1019
Boundary integral equation approach for stokes slip flow in rotating mixers
1 Grupo de Energía y Termodinámica, Escuela de Ingenierías, Universidad Pontificia Bolivariana, Medellín, Circular 1 No. 73-34, Colombia, Colombia 2 Division of Energy and Sustainability, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
Received March 2010 Revised June 2010 Published March 2011
In order to employ continuum models in the analysis of the flow behaviour of a viscous Newtonian fluid in micro-scale devices, it is necessary to consider appropriate slip boundary conditions at the wall surfaces instead of the classical no-slip condition. To account for the slip condition at the nano-scale, we used a Navier-type boundary condition that relates the tangential fluid velocity at the boundaries to the tangential shear rate. In this work a boundary integral equation formulation for Stokes slip flow is presented, based on the normal and tangential projection of Green's integral representational formulae for the Stokes velocity field, which directly incorporates into the integral equations the local tangential shear rate at the wall surfaces. This formulation is used to numerically simulate concentric and eccentric rotating Couette mixers and a single-rotor mixer, including the effect of thermal creep in cases of rarefied gases. The numerical results obtained for the concentric and eccentric Couette mixers are validated against the corresponding analytical solutions, showing excellent agreement.
Citation: César Nieto, Mauricio Giraldo, Henry Power. Boundary integral equation approach for stokes slip flow in rotating mixers. Discrete and Continuous Dynamical Systems - B, 2011, 15 (4) : 1019-1044. doi: 10.3934/dcdsb.2011.15.1019
# Evaluate improper integral $(\cos(2x)-1)/x^2$
Consider the following improper integral:
$$\int_0^\infty \frac{\cos{2x}-1}{x^2}\;dx$$
I would like to evaluate it via contour integration (the path is a semicircle in the upper plane), but I have some problems: first, the only singularity would be $z=0$, but it is only an apparent (removable) singularity, so the residue is $0$. There are no other singularities of interest, so the integral should be zero... But it can't be zero, so?
The integral of this function along any closed path is $0$, for the reason you mentioned. What would make you think it can't be $0$? – Michael Hardy Feb 18 '12 at 21:32
$\cos(2x) - 1 = -2\sin^2(x)$, thus $\int_0^\infty \frac{\cos(2x)-1}{x^2} \mathrm{d} x = - 2 \int_0^\infty \left(\frac{\sin(x)}{x}\right)^2 \mathrm{d} x = -\pi$. – Sasha Feb 18 '12 at 22:00
Ok, it's correct, but the last integral is just the same... I feel uneasy because, reasoning in terms of contour integration, I can't solve either the integral above or the one you use (both have an apparent singularity at $z=0$). So, again, shouldn't it be zero? – quark1245 Feb 18 '12 at 22:13
Write down your contour argument more carefully. – GEdgar Feb 18 '12 at 22:20
Strategy: let $f(z)=(\cos(2z)-1)/z^2$; our path in the complex plane is the semicircle of radius $R$ in the upper plane, oriented positively, plus the real interval from $-R$ to $-\epsilon$, the semicircle of radius $\epsilon$ (just to avoid the singularity in $z=0$), plus the real interval from $\epsilon$ to $R$. In the limit $\epsilon\to 0$ and $R\to\infty$, the integral along the semicircle of radius $R$ is zero (consider the absolute value of the integral), and the integral along the semicircle of radius $\epsilon$ is zero (the singularity is apparent, so the residue is zero)... to be continued – quark1245 Feb 18 '12 at 22:28
$$\int_0^\infty\frac{\cos 2\,x-1}{x^2}\,dx=\frac12\,\int_{-\infty}^\infty\frac{\cos 2\,x-1}{x^2}\,dx=\frac12\,\int_{-\infty}^\infty\frac{\Re(e^{2ix}-1)}{x^2}\,dx.$$ The key is the choice of function to integrate along a path. The function $$f(z)=\frac{e^{2iz}-1}{z^2}=\frac{2\,i}{z}-2-\frac{4\,i}{3}\,z+\cdots$$ has a simple pole at $z=0$ with residue $2\,i$. Take $R>0$ large and $\epsilon>0$ small. Integrate along a path formed by the positively oriented semicircle of radius $R$ in the upper half plane ($C_R$), the interval $[-R,-\epsilon]$, the negatively oriented semicircle of radius $\epsilon$ ($C_\epsilon$) and the interval $[\epsilon,R]$, and take the limit as $R\to\infty$ and $\epsilon\to0$. The integral along the whole path is zero, $\lim_{R\to\infty}\int_{C_R}f(z)\,dz=0$, but $\lim_{\epsilon\to0}\int_{C_\epsilon}f(z)\,dz=?$.
Now we are rescued by the half residue theorem, which states that the last integral is $\pi i$ times the residue at zero, which is $2i$. Change the sign because the semicircle is oriented negatively. So, the integral is $-2\pi$. Done. Yet, can you state more precisely where my previous argument goes wrong? – quark1245 Feb 19 '12 at 9:50
$$\lim_{R\to\infty}\int_{C_R}\frac{\cos 2\,z-1}{z^2}\,dz\ne0.$$ – Julián Aguirre Feb 19 '12 at 18:05
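Assembling the pieces of the argument above explicitly, with $f(z)=(e^{2iz}-1)/z^2$ rather than the cosine version: on $C_R$ one has $|e^{2iz}-1|\le 2$ for $\Im z\ge 0$, so $$\left|\int_{C_R}f(z)\,dz\right|\le\frac{2}{R^2}\,\pi R\longrightarrow 0,$$ while on the small negatively oriented semicircle only the pole term $2i/z$ survives the limit, giving $$\lim_{\epsilon\to0}\int_{C_\epsilon}f(z)\,dz=-\pi i\,\operatorname{Res}_{z=0}f(z)=-\pi i\,(2i)=2\pi.$$ Since the integral over the closed path is zero, $\operatorname{PV}\int_{-\infty}^{\infty}\frac{e^{2ix}-1}{x^2}\,dx=-2\pi$, and taking the real part and halving gives $\int_0^\infty\frac{\cos 2x-1}{x^2}\,dx=-\pi$, consistent with the Laplace-transform answer below.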
It is much easier to use Laplace Transform to calculate this improper integral. Recall that if $F(s)$ is the Laplace Transform of $f(x)$, then $$\mathcal{L}\big\{\frac{f(x)}{x}\big\}=\int_s^\infty F(u)du.$$ Let $f(x)=\cos 2x-1$; then $F(s)=\frac{s}{s^2+4}-\frac{1}{s}$. Thus \begin{eqnarray*} \mathcal{L}\big\{\frac{\cos 2x-1}{x}\big\}&=&\int_s^\infty\left(\frac{u}{u^2+4}-\frac{1}{u}\right)du\\ &=&\ln s-\frac{1}{2}\ln(s^2+4). \end{eqnarray*} Therefore \begin{eqnarray*} \mathcal{L}\big\{\frac{\cos 2x-1}{x^2}\big\}&=&\int_s^\infty\left(\ln u-\frac{1}{2}\ln(u^2+4)\right)du\\ &=&-\pi+2\arctan\frac{s}{2}-s\ln s+\frac{s}{2}\ln(s^2+4). \end{eqnarray*} So $$\int_0^\infty\frac{\cos 2x-1}{x^2}dx=\lim_{s\to 0^+}\left(-\pi+2\arctan\frac{s}{2}-s\ln s+\frac{s}{2}\ln(s^2+4)\right)=-\pi.$$ |
# Contact of a spherical probe with a stretched rubber substrate
In a recently published paper, we report on a theoretical and experimental investigation of the normal contact of stretched neo-Hookean substrates with rigid spherical probes. Starting from a published formulation of the surface Green's function for incremental displacements on a pre-stretched, neo-Hookean substrate (L.H. Lee, J. Mech. Phys. Sol. 56 (2008) 2957-2971), a model is derived for both adhesive and non-adhesive contacts. The shape of the elliptical contact area together with the contact load and the contact stiffness are predicted as a function of the in-plane stretch ratios $\lambda_x$ and $\lambda_y$ of the substrate. The validity of this model is assessed by contact experiments carried out using a uniaxially stretched silicone rubber. For stretch ratios below about 1.25, good agreement is observed between theory and experiments. Above this threshold, some deviations from the theoretical predictions appear as a result of the departure of the mechanical response of the silicone rubber from the neo-Hookean description embedded in the model.
# Open sets having an empty intersection but the intersection of their closure is not empty
Suppose $$V_{n}$$ is a decreasing sequence of (bounded) open sets in $$\mathbb{R}^{m}$$ with $$m\geq1$$. Suppose the intersection of all $$V_{n}$$ is empty, and let $$F$$ be the intersection of the closures of $$V_{n}$$. Can we say that there exists $$N$$ such that every $$x$$ in $$F$$ belongs to the boundary of $$V_{n}$$, for $$n\geq N$$?
(This question is suggested by setting $$V_{n}=(0,1/n)$$)
• I think when $\bigcap_n V_n =\emptyset$ that $\bigcap_n \overline{V_n}$ will always have empty interior, so the supposition is superfluous. Jul 13, 2019 at 22:19
• You are right. I fixed it. Jul 14, 2019 at 2:26
No. For instance, let $$V_n$$ be the union of a small open interval around $$1/m$$ for each $$m>n$$ and a small open interval with left endpoint $$1/m$$ for each $$m\leq n$$, the intervals being small enough to not overlap and shrinking to $$0$$ as $$n\to\infty$$. Then $$F=\{0\}\cup\{1/n:n\in\mathbb{Z}_+\}$$ but $$1/m$$ is only in the boundary of $$V_n$$ if $$n\geq m$$.
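To make the construction concrete, here is one explicit choice (the radii $$4^{-m-n}$$ are just a convenient option that keeps the intervals disjoint and nested): $$V_n=\bigcup_{m\le n}\left(\tfrac{1}{m},\,\tfrac{1}{m}+4^{-m-n}\right)\;\cup\;\bigcup_{m>n}\left(\tfrac{1}{m}-4^{-m-n},\,\tfrac{1}{m}+4^{-m-n}\right).$$ This sequence is decreasing with $$\bigcap_n V_n=\varnothing$$, its closures intersect in $$F=\{0\}\cup\{1/m:m\in\mathbb{Z}_+\}$$, and $$1/m\in\partial V_n$$ exactly when $$n\ge m$$, so no single $$N$$ works for every point of $$F$$.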
• $V_n$ is not a single interval, it is a union of intervals, one for each $m$. Jul 13, 2019 at 22:28 |
## How does the の work in 「日本人の知らない日本語」?
I've read that 日本人の知らない日本語 translates to: "Japanese (language) that Japanese (people) don't know". But I don't understand how or what the の does in that sentence. If I'm not mistaken 知らない日本語 could mean "Japanese language that (x) don't know" or "even unknown Japanese". But I don't get how the 日本人の fits into the translation.
In your example, 日本人の知らない is a relative clause, equivalent in meaning to 日本人が知らない. This clause as a whole modifies 日本語, so it means the Japanese that Japanese people don't know.
In relative clauses, the subject particle が can be replaced with の:
1. ジョンが買った本
2. ジョンの買った本
The book John bought
This is true in double-subject constructions as well:
1. ジョンが背が高い理由
2. ジョンの背が高い理由
3. ジョンが背の高い理由
4. ジョンの背の高い理由
The reason John is tall
But you can't replace が with の if there's a direct object marked with を:
1. ジョンが本を買った店
2. *ジョンの本を買った店 (ungrammatical)
The store where John bought the book
Interestingly enough, が can be used in 文語 Japanese where in modern, oral Japanese only の is usually acceptable (我々が心), as is obvious in a bunch of place names (written ヶ -- e.g. 関ケ原, 霞ヶ関 etc). Maybe が and の were more broadly interchangeable in different times / regions, but only clearly remained so in the case of relative clauses in modern / standard Japanese ? – desseim – 2019-08-31T17:34:49.360
Maybe I am beating a dead horse, but ジョンの本を買った店 can mean "The store where someone bought the book written by John." – eltonjohn – 2015-06-26T13:06:31.157
When I say "ungrammatical", to be more precise I really mean "ungrammatical with the intended interpretation". That is, the process of nominative-genitive conversion is ungrammatical here due to the transitivity restriction, even though there may be an alternative source for the example in question, as you point out. Thank you for your comment! – snailcar – 2015-06-26T20:52:12.030
In the last two sentences using を, they both strike me as grammatical, but the meaning shifts -- sentence 2 parses out to "the store that bought John's book". – Eiríkr Útlendi – 2015-11-13T07:14:51.650
Please see my comment above :-) – snailcar – 2015-11-13T10:39:22.990
It's just standard GA-NO conversion.
[日本人が知らない]日本語
'Japanese that [Japanese don't know]'
In more precise terms: の can act like a subject (nominative) particle in descriptive (attributive/relative) clauses. – ithisa – 2013-09-14T23:26:43.247 |
### Archived Sunday 16h February 2014 - THE IRISH SESSION - at The Bull, Tanners Street
Do you remember 20 years ago . . . . . in a pub opposite the Guildhall drinking beer whilst a bunch of people sang, squeezed their squeeze boxes, played their fiddles, banged on their drum things . . . and we all sang along . . . . (even if we said we didn't like folk-type music)
Well . . . It's back! Some of the same people - same concept - same headache the next morning - it's 1992 all over again! But now at The Bull
If you're too young to have lived the dream first time around . . . . . Now's your chance! 8pm |
10-118.
Rewrite the integrand in terms of sine and cosine and use the Pythagorean Identity.
$=\int\frac{(1-\cos^2(x))\sin(x)}{\cos^7(x)}dx$
Let u = cos(x).
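Carrying the substitution through for reference, with u = cos(x) and du = -sin(x) dx:
$=\int\frac{u^2-1}{u^7}du=\int\left(u^{-5}-u^{-7}\right)du=\frac{1}{6}u^{-6}-\frac{1}{4}u^{-4}+C=\frac{1}{6}\sec^6(x)-\frac{1}{4}\sec^4(x)+C$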
Use integration by parts: Let f = tan⁻¹(x) and dg = x dx
$\int x\tan^{-1}(x)dx=\frac{1}{2}x^2\tan^{-1}(x)-\frac{1}{2}\int \frac{x^2}{1+x^2}dx$
Use polynomial division:
$=\frac{1}{2}x^2\tan^{-1}(x)-\frac{1}{2}\int \Big(1-\frac{1}{1+x^2}\Big)dx$
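Evaluating the remaining integral then gives, for reference:
$=\frac{1}{2}x^2\tan^{-1}(x)-\frac{1}{2}\left(x-\tan^{-1}(x)\right)+C$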
Use substitution: Let u = ln(x)
Use substitution:
$\text{Let }u=\sqrt{x}$
$df=\frac{1}{1+x^2}\text{, }g=\frac{1}{2}x^2$ |
# Einstein notation and the permutation symbol
1. Sep 14, 2014
### rcummings89
1. The problem statement, all variables and given/known data
This is my first exposure to Einstein notation and I'm not sure if I'm understanding it entirely. Also I added this class after my instructor had already lectured about the topic and largely had to teach myself, so I ask for your patience in advance...
The question is:
Evaluate the following expression: ε_{ijk}a_i a_j
2. Relevant equations
a ^ b = a_i e_i ^ b_j e_j = a_i b_j (e_i ^ e_j) = a_i b_j ε_{ijk} e_k
Where I'm following his notation that ^ represents the cross product of the two vectors
3. The attempt at a solution
Now, just going off what I have seen so far in the handout he has posted, I believe the answer to be
ε_{ijk}a_i a_j = (a ^ a)_k; that is, ε_{ijk}a_i a_j is the kth component of a ^ a, and because the expression is a vector crossed with itself it is equal to zero
But what does it mean to be the kth component of a cross product? Honestly, I'm working backward from a similar example he has in the handout and making the assumption that the reason the e_k component is absent from the expression is because it is the kth component of the cross product, but from what I have to reference I cannot say with any degree of certainty if that is true, and it makes me uncomfortable. Any help is greatly appreciated.
2. Sep 14, 2014
### pasmith
For a vector, $a_k$ is the component corresponding to $\mathbf{e}_k$. Thus $\mathbf{a} = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + a_3\mathbf{e}_3$. If you work out the cross product of $\mathbf{a}$ and $\mathbf{b}$ you'll find that the $\mathbf{e}_1$ component is $a_2b_3 - a_3b_2 = \epsilon_{ij1}a_i b_j$ and similarly for the other components. Thus $\epsilon_{ijk}a_i b_j$ is the $\mathbf{e}_k$ component of $\mathbf{a} \times \mathbf{b}$.
You can get that $\epsilon_{ijk}a_ia_j = 0$ more easily by observing that swapping the dummy indices $i$ and $j$ changes the sign of $\epsilon_{ijk}$ but doesn't change the sign of $a_ia_j$; thus $\epsilon_{ijk}a_ia_j = \epsilon_{jik}a_ja_i = -\epsilon_{ijk}a_ia_j = 0$. The same argument shows that if $T_{ij}$ is any symmetric tensor then $\epsilon_{ijk}T_{ij} = 0$.
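To see the cancellation concretely, fix $k=3$ and expand the sum over $i$ and $j$: the only nonzero terms are $$\epsilon_{123}a_1a_2+\epsilon_{213}a_2a_1=a_1a_2-a_2a_1=0,$$ and the same pairwise cancellation occurs for $k=1$ and $k=2$, which is exactly why the full contraction vanishes.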
3. Sep 14, 2014
### rcummings89
pasmith,
Thank you for clarifying, that definitely helps! But it does bring up another question for me though; again, I'm in the early stages of learning about this notation, and know that switching the indices changes ε_{ijk} to ε_{jik} = -ε_{ijk}, but why does it equal zero? |
Edit (2008-11-09): Robert Bradshaw posted a patch to my code and the Cython implementation is now a lot faster. Click here to read more.
In a comment on a recent post, Robert Samal asked how Cython compares to C++. The graph below shows a comparison of a greedy critical set solver written in Cython and C++ (both use a brute force, naive, non-randomised implementation of a depth first search):
So things look good until n = 10. In defence of Cython, I must point out that my implementation was a first attempt and I am by no means an expert on writing good Cython code. Also, the Cython code is probably fast enough – in my experience, solving problems (computationally) for latin squares of order 10 is futile, so the code is more convenient for testing out small ideas.
edit: the code is here
edit: Robert’s code is here http://sage.math.washington.edu/home/robertwb/cython/scratch/cython-latin/
Date: 2008-03-04 05:40:32 UTC
Author: Mike Hansen
You should post the Cython and C++ code because it looks like there may be some obvious fixes to the Cython to make it behave better.
Date: 2008-03-04 21:01:39 UTC
Author: Robert Samal
Does somebody else have experience with how cython compares with C/C++? Every once in a while I need to do some computation (something NP-complete or worse in general, so it usually ends up as an ugly backtracking search). I’d be happy to do everything from within Sage (using python/cython), but I’m not sure if it is fast enough (or if it is getting fast enough; I suppose that cython is improving gradually).
Date: 2008-07-07 11:59:22 UTC
Author: Alexandre Delattre
Hi,
After looking quickly into the code, I’m pretty sure some overhead is caused by the __getitem__ and __setitem__ methods you use to override the [] operator.
When calling L[i, j] (or L[i, j] = x), those special methods are resolved at runtime and hence involve additional Python machinery. While they make the code readable, you lose the benefit of “cdef” methods, which are called much faster.
IMO, a good compromise would be to put the code from __getitem__ into a regular ‘cdef getitem()’ method, then make __getitem__ a thin wrapper around the regular method:
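The code block that originally accompanied this comment is not reproduced here; a minimal sketch of the pattern being suggested might look like the following (the class layout and names are illustrative guesses, not the post’s actual code):

cdef class LatinSquare:
    cdef int n
    cdef int* cells                      # flat n*n storage; illustrative

    # C-level accessors: no Python attribute lookup or tuple unpacking.
    cdef int getitem(self, int i, int j):
        return self.cells[i * self.n + j]

    cdef void setitem(self, int i, int j, int value):
        self.cells[i * self.n + j] = value

    # Thin Python wrappers so L[i, j] still works from Python code.
    def __getitem__(self, key):
        return self.getitem(key[0], key[1])

    def __setitem__(self, key, value):
        self.setitem(key[0], key[1], value)

Inside other cdef code the hot loops would then call L.getitem(i, j) directly, bypassing the Python-level protocol entirely.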
and replace the L[i, j] by L.getitem(i, j) in your cython code.
Also, putting a “void” return type on cdef methods that return nothing could help a bit.
I’ll try to make these changes and run the benchmark again.
Date: 2008-11-08 15:12:21 UTC
This graph looked pretty depressing, so I made some optimizations to your code (basically the ones suggested above, and a couple of other glaring things that stood out). The algorithm is still completely the same, and I didn’t do any code re-factoring other than __getitem__/__setitem__, just mostly typing things here and there. It’s now faster than c++ on my machine for the whole range graphed above (and much faster for small inputs).
Code and diff up at http://sage.math.washington.edu/home/robertwb/cython-latin/
Date: 2008-11-11 17:24:47 UTC
Author: Ben Racine
Any chance that we might see a plot of the improved data… wouldn’t want people to come here and only see the ‘depressing’ data.
Date: 2008-11-11 17:27:12 UTC
Author: Ben Racine
Nevermind, I now see the new results up one level.
Date: 2011-09-14 02:19:56 UTC
Author: Alex Quinn
http://carlo-hamalainen.net/blog/?p=35
Same for the link to the motivation (“recent post”):
http://carlo-hamalainen.net/blog/?p=12
Are these viewable elsewhere?
Thanks a lot for doing this and posting it! Very helpful in any case.
Date: 2011-09-14 02:22:59 UTC
Author: Alex Quinn
Found it! Here’s the post with the improved data:
http://carlo-hamalainen.net/blog/2008/11/09/cython-vs-c-improved/
Date: 2011-09-14 03:47:40 UTC
Author: Alex Quinn |
Model. Anal. Inform. Sist., 2018, Volume 25, Number 1, Pages 71–82 (Mi mais610)
Dynamical Systems
Features of the local dynamics of the opto-electronic oscillator model with delay
E. V. Grigoryevaa, S. A. Kashchenkob, D. V. Glazkovb
a Belarus Economic State University, 26 Partizanski Av., Minsk 220070, Belarus
b P.G. Demidov Yaroslavl State University, 14 Sovetskaya str., Yaroslavl 150003, Russia
Abstract: We consider an electro-optic oscillator model which is described by a system of delay differential equations (DDE). The essential feature of this model is a small parameter in front of a derivative, which allows us to draw conclusions about the action of processes with velocities of different orders. We analyse the local dynamics of the singularly perturbed system in the vicinity of the zero steady state. The characteristic equation of the linearized problem has an asymptotically large number of roots with real parts close to zero when the parameters are close to critical values. To study the bifurcations occurring in the system, we construct special normalized equations for slow amplitudes which describe the behaviour of solutions of the original problem close to zero. An important feature of these equations is that they do not depend on the small parameter. The root structure of the characteristic equation and the order of supercriticality determine the kind of normal form, which can be represented as a partial differential equation (PDE). The role of the "space" variable is played by "fast" time, which satisfies periodicity conditions. We note the fast response of the dynamic features of the normalized equations to fluctuations of the small parameter, which is a sign of a possible unlimited process of direct and inverse bifurcations. Some of the obtained equations also possess the multistability feature.
Keywords: differential equation, local dynamics, small parameter, asymptotics, bifurcation, normal form, boundary value problem.
Funding Agency: Ministry of Education and Science of the Russian Federation, grant number 1.10160.2017/5.1. This work was carried out within the framework of the state programme of the Ministry of Education and Science of the Russian Federation, project No. 1.10160.2017/5.1.
DOI: https://doi.org/10.18255/1818-1015-2018-1-71-82
Full text: PDF file (573 kB)
References: PDF file HTML file
UDC: 517.929
Citation: E. V. Grigoryeva, S. A. Kashchenko, D. V. Glazkov, “Features of the local dynamics of the opto-electronic oscillator model with delay”, Model. Anal. Inform. Sist., 25:1 (2018), 71–82
Citation in format AMSBIB
\Bibitem{GriKasGla18}
\by E.~V.~Grigoryeva, S.~A.~Kashchenko, D.~V.~Glazkov
\paper Features of the local dynamics of the opto-electronic oscillator model with delay
\jour Model. Anal. Inform. Sist.
\yr 2018
\vol 25
\issue 1
\pages 71--82
\mathnet{http://mi.mathnet.ru/mais610}
\crossref{https://doi.org/10.18255/1818-1015-2018-1-71-82}
\elib{http://elibrary.ru/item.asp?id=32482540}
# The Square of A Complex Number
Algebra Level 2
$\large\displaystyle \alpha = \frac{1}{ \sqrt{2}} + \frac{1}{ \sqrt{2}}i \quad , \quad \alpha^2 = \, ?$
Notation: $$i$$ is the imaginary number $$\sqrt{-1}$$.
Unity [web] Not Sure What Language Makes The Most Sense
Recommended Posts
I have no programming experience. I've been writing HTML for the last decade and have worked loosely with web-related languages like XML, CSS, PHP, and JavaScript. Note that I cannot write anything in any of these; I've only successfully modified existing code, and even then such modifications were generally minor.

I want to create a website that will have a member registry including names, addresses, personal details, and numerous other fields. Eventually there will be tens of thousands of people within it. I'd like to create a simpler form of MySpace for users to create their own small websites within the community, which would include custom backgrounds, wikis, blogs, and several "user friendly" submission systems for things like news, journal entries, wiki entries, and so on.

I assume this will require a database, and that is something I know nothing about. I've worked with them in the past when I installed and ran phpBB forums, but all of the hard work was pretty much taken care of by the installer. Other than this website I do not plan on "programming" anything. If I can avoid learning multiple languages to accomplish this I would like to; this is definitely a situation where I would like to take the path of least resistance. I'm just not sure what that route is. C# and ASP? Java? CSS and XML? I am willing to put in the time but would like to avoid wasting time wherever possible. What would be the best route to go? Any suggestions would be most appreciated.
Share on other sites
LAMP (Linux, Apache, MySQL, PHP) aka all open source
or you could go with ASP and MySQL
or RubyOnRails.
But making a MySpace-like website could take a lot of work. You would probably be better off buying a social networking website creator that makes it all for you. I think I have seen them for about $160-$200.
http://www.alstrasoft.com/efriends.htm
Share on other sites
The problem with social networking software is that it is very much an "all in one" package. I only need a couple of the features of these massive packages and most of those features themselves are too feature rich. I think a custom tailored website would be much more effective for what I will be using it for. If using ASP will I need to learn C# as well? Is ASP the kind of thing I could just pick up without learning a precursor language (like CSS is to HTML)?
Share on other sites
Quote:
Original post by modmiddy: If using ASP will I need to learn C# as well? Is ASP the kind of thing I could just pick up without learning a precursor language (like CSS is to HTML)?
Very rougly put:
When developing an ASP.NET page your logic code is either in C# or VB, or both. The markup language you'll be using is a superset of HTML.
A very good place to get started is the Official Microsoft ASP.NET Site. I recommend you check out their beginner videos; there are 14 videos and they are about half an hour each, so they cover a lot of ground.
Share on other sites
Websites where users can create their own content are not entirely trivial. If you don't want to use programming later on, I suggest you pay somebody else to do it, or find an existing system. Learning how to program well enough to create something coherent will be a bigger task than using that knowledge to create the actual website.
Whatever you do, if you make your own system, make sure to ask about how to properly secure all those people's data, or it will be exploited.
• Similar Content
Hello. I'm a newbie in Unity and have just started learning the basics of this engine. I want to create a game like StackJump (links are below), and now I'm wondering what features I have to use to create such a game. Should I use the physics engine, or can I move objects by changing the transform manually in Update()?
If I should use physics, can you briefly point me to how to implement it and what to use? Just general info; no need for a detailed description of the development process.
Game in PlayMarket
Video of the game
• By GytisDev
Hello,
without going into any details, I am looking for any articles, blogs, or advice about city-building and RTS games in general. I tried to search for these on my own, but would like to see your input also. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I would be able to place two types of buildings, make farms and cut trees for resources while controlling a single worker. I have some problems understanding how these games work in the back-end: how various data about the map and objects can be stored, how grids work, how to implement a work system (like a little cube (human) walking to a tree and cutting it) and so on. I am also pretty confident in my programming capabilities for such a game. Sorry if I make any mistakes, English is not my native language.
• By Ovicior
Hey,
So I'm currently working on a rogue-like top-down game that features melee combat. Getting basic weapon stats like power, weight, and range is not a problem. I am, however, having a problem with coming up with a flexible and dynamic system to allow me to quickly create unique effects for the weapons. I want to essentially create a sort of API that is called when appropriate and gives whatever information is necessary (For example, I could opt to use methods called OnPlayerHit() or IfPlayerBleeding() to implement behavior for each weapon). The issue is, I've never actually made a system as flexible as this.
My current idea is to make a base abstract weapon class, and then have calls to all the methods when appropriate in there (OnPlayerHit() would be called whenever the player's health is subtracted from, for example). This would involve creating a sub-class for every weapon type and overriding each method to make sure the behavior works appropriately. This does not feel very efficient or clean at all. I was thinking of using interfaces to allow for the implementation of whatever "event" is needed (such as having an interface for OnPlayerAttack(), which would force the creation of a method that is called whenever the player attacks something).
Here's a couple unique weapon ideas I have:
Explosion sword: Create explosion in attack direction.
Cold sword: Chance to freeze enemies when they are hit.
Electric sword: On attack, electricity chains damage to nearby enemies.
I'm basically trying to create a sort of API that'll allow me to easily inherit from a base weapon class and add additional behaviors somehow. One thing to know is that I'm on Unity, and swapping the weapon object's weapon component whenever the weapon changes is not at all a good idea. I need some way to contain all this varying data in one Unity component that can contain a Weapon field to hold all this data. Any ideas?
I'm currently considering having a WeaponController class that can contain a Weapon class, which calls all the methods I use to create unique effects in the weapon (Such as OnPlayerAttack()) when appropriate.
• Hi fellow game devs,
First, I would like to apologize for the wall of text.
As you may notice I have been digging in vehicle simulation for some times now through my clutch question posts. And thanks to the generous help of you guys, especially @CombatWombat I have finished my clutch model (Really CombatWombat you deserve much more than a post upvote, I would buy you a drink if I could ha ha).
Now the final piece in my vehicle physic model is the differential. For now I have an open-differential model working quite well by just outputting torque 50-50 to left and right wheel. Now I would like to implement a Limited Slip Differential. I have very limited knowledge about LSD, and what I know about LSD is through readings on racer.nl documentation, watching Youtube videos, and playing around with games like Assetto Corsa and Project Cars. So this is what I understand so far:
- The LSD acts like an open-diff when there is no torque from engine applied to the input shaft of the diff. However, in clutch-type LSD there is still an amount of binding between the left and right wheel due to preload spring.
- When there is torque to the input shaft (on power and off power in 2 ways LSD), in ramp LSD, the ramp will push the clutch patch together, creating binding force. The amount of binding force depends on the amount of clutch patch and ramp angle, so the diff will not completely locked up and there is still difference in wheel speed between left and right wheel, but when the locking force is enough the diff will lock.
- There also something I'm not sure is the amount of torque ratio based on road resistance torque (rolling resistance I guess)., but since I cannot extract rolling resistance from the tire model I'm using (Unity wheelCollider), I think I would not use this approach. Instead I'm going to use the speed difference in left and right wheel, similar to torsen diff. Below is my rough model with the clutch type LSD:
speedDiff = leftWheelSpeed - rightWheelSpeed;

//torque to differential input shaft
//first treat the diff as an open diff with equal torque to both wheels
inputTorque = gearBoxTorque * 0.5f;

//then modify torque to each wheel based on wheel speed difference
//the difference in torque depends on speed difference, throttleInput (on/off power),
//amount of locking force wanted at different amounts of speed difference,
//and preload force

//torque to left wheel
leftWheelTorque = inputTorque - (speedDiff * preLoadForce + lockingForce * throttleInput);
//torque to right wheel
rightWheelTorque = inputTorque + (speedDiff * preLoadForce + lockingForce * throttleInput);

I'm putting throttle input in because from what I've read the amount of locking also depends on the amount of throttle input (harder throttle -> higher torque input -> stronger locking). The model is nowhere near good, so please jump in and correct me.
Also I have a few questions:
- In torsen/geared LSD, is it correct that the diff actually never lock but only split torque based on bias ratio, which also based on speed difference between wheels? And does the bias only happen when the speed difference reaches the ratio (say 2:1 or 3:1) and below that it will act like an open diff, which basically like an open diff with an if statement to switch state?
- Is it correct that the amount of locking force in clutch LSD depends on amount of input torque? If so, what is the threshold of the input torque to "activate" the diff (start splitting torque)? How can I get the amount of torque bias ratio (in wheelTorque = inputTorque * biasRatio) based on the speed difference or rolling resistance at wheel?
- Is the speed at the input shaft of the diff always equals to the average speed of 2 wheels ie (left + right) / 2? |
Calculates for each resource or resource-activity combination in what percentage of cases it is present.
Next to the resource_frequency, the involvement of resources in cases can be of interest to, e.g., decide how "indispensable" they are. This metric is provided on three levels of analysis, which are the cases, the resources, and the resource-activity combinations.
## Usage
resource_involvement(
log,
level = c("case", "resource", "resource-activity"),
append = deprecated(),
append_column = NULL,
sort = TRUE,
eventlog = deprecated()
)
# S3 method for log
resource_involvement(
log,
level = c("case", "resource", "resource-activity"),
append = deprecated(),
append_column = NULL,
sort = TRUE,
eventlog = deprecated()
)
# S3 method for grouped_log
resource_involvement(
log,
level = c("case", "resource", "resource-activity"),
append = deprecated(),
append_column = NULL,
sort = TRUE,
eventlog = deprecated()
)
## Arguments
log
log: Object of class log or derivatives (grouped_log, eventlog, activitylog, etc.).
level
character (default "case"): Level of granularity for the analysis: "case" (default), "resource", or "resource-activity". For more information, see vignette("metrics", "edeaR") and 'Details' below.
append
logical (default FALSE) : The arguments append and append_column have been deprecated in favour of augment.
Indicating whether to append results to original log. Ignored when level is "log" or "trace".
append_column
The arguments append and append_column have been deprecated in favour of augment.
Which of the output columns to append to log, if append = TRUE. Default column depends on chosen level.
sort
logical (default TRUE): Sort output on count. Only for levels with frequency count output.
eventlog
Deprecated; please use log instead.
## Details
Argument level has the following options:
• On "case" level, the absolute and relative number of distinct resources executing activities in each case is calculated, to get an overview of which cases are handled by a small amount of resources and which cases need more resources, indicating a higher level of variance in the process.
• On "resource" level, this metric provides the absolute and relative number of cases in which each resource is involved, indicating which resources are more "necessary" within the process than the others.
• On "resource-activity" level, this metric provides a list of all resource-activity combinations with the absolute and relative number of cases in which each resource-activity combination is involved.
## Methods (by class)
• resource_involvement(log): Computes the resource involvement for a log.
• resource_involvement(grouped_log): Computes the resource involvement for a grouped_log.
## References
Swennen, M. (2018). Using Event Log Knowledge to Support Operational Excellence Techniques (Doctoral dissertation). Hasselt University.
## See also
resource_frequency
Other metrics: activity_frequency(), activity_presence(), end_activities(), idle_time(), number_of_repetitions(), number_of_selfloops(), number_of_traces(), processing_time(), resource_frequency(), resource_specialisation(), start_activities(), throughput_time(), trace_coverage(), trace_length() |
# Is the scalar field operator self-adjoint?
In A. Zee's QFT in a Nutshell, he defines the field for the Klein-Gordon equation as
$$\tag{1}\varphi(\vec x,t) = \int\frac{d^Dk}{\sqrt{(2\pi)^D2\omega_k}}[a(\vec k)e^{-i(\omega_kt-\vec k\cdot\vec x)} + a^\dagger(\vec k)e^{i(\omega_kt-\vec k\cdot\vec x)}]$$
When calculating $$\pi=\partial_0\varphi^\dagger$$, I came to
$$\tag{2}\varphi^\dagger(\vec x,t) = \int\frac{d^Dk}{\sqrt{(2\pi)^D2\omega_k}}[a^\dagger(\vec k)e^{i(\omega_kt-\vec k\cdot\vec x)} + a(\vec k)e^{-i(\omega_kt-\vec k\cdot\vec x)}]$$
But this would imply that $$\varphi^\dagger=\varphi$$. Is that correct?
(Intuitively it would make sense, because in QM we also consider self-adjoint operators.)
If it's correct, then why do we explicitly write $$\pi=\partial_0\varphi^\dagger$$ instead of just $$\pi=\partial_0\varphi$$? Why bother distinguishing $$\varphi$$ from $$\varphi^\dagger$$ at all?
In case it is not correct, then the first two equations of this answer are most likely wrong.
• For neutral fields that is correct. If you want to keep the formalism as general as possible and include charged particles you really want to allow for non-selfadjoint fields. – Phoenix87 Nov 11 '15 at 11:07
• Oh OK, makes sense. (if you want some rep, write that as an answer). – Bass Nov 11 '15 at 11:09
For a real scalar field, I think what you have written is correct. But if you want to describe a complex scalar field, then we need to distinguish between $\phi$ and $\phi^{\dagger}$...
### How high must one be for the curvature of the earth to be visible to the eye?
• I would like to ask at what distance from the Earth's surface the curvature of the Earth becomes visible. In which layer of the atmosphere does this happen?
I've noticed that at a height of 9-12 km (the view from aeroplanes) it is not visible.
5 years ago
Depends on your eye. You can realise the curvature of the Earth by just going to the beach. Last summer I was on a scientific cruise in the Mediterranean. I took two pictures of a distant boat, within an interval of a few seconds: one from the lowest deck of the ship (left image), the other one from our highest observation platform (about 16 m higher; picture on the right):
A distant boat seen from 6 m (left) and from 22 m (right) above the sea surface. The boat was about 30 km away. My pictures, taken with a 30x optical zoom camera.
The part of the boat that is missing in the left image is hidden by the quasi-spherical shape of the Earth. In fact, if we knew the size of the boat and its distance, we could infer the radius of the Earth. But since we already know this, let's do it the other way around and deduce the distance to which we can see the full boat:
The distance $d$ from an observer $O$ at an elevation $h$ to the visible horizon follows the equation (adopting a spherical Earth):
$$d=R\times\arctan\left(\frac{\sqrt{2\times{R}\times{h}}}{R}\right)$$
where $d$ and $h$ are in meters and $R=6370\times 10^3\ \mathrm{m}$ is the radius of the Earth. The plot is like this:
Distance of visibility d (vertical axis, in km), as a function of the elevation h of the observer above the sea level (horizontal axis, in m).
From just 3 m above the surface, you can see the horizon 6.2 km apart. If you are 30 m high, then you can see up to 20 km far away. This is one of the reasons why the ancient cultures, at least since the sixth century BC, knew that the Earth was curved, not flat. They just needed good eyes. You can read first-hand Pliny (1st century) on the unquestionable spherical shape of our planet in his Historia Naturalis.
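A quick numerical check of those figures, using the same spherical-Earth formula (this little script is mine, added for illustration; the function names are made up):

import math

R = 6370e3  # Earth radius in metres

def horizon_distance(h):
    # distance (m) along the surface to the visible horizon from elevation h (m)
    return R * math.atan(math.sqrt(2 * R * h) / R)

def horizon_dip(h):
    # angle (degrees) by which the horizon sits below the astronomical horizon
    return math.degrees(math.atan(math.sqrt(2 * R * h) / R))

for h in (3, 30, 290, 10_000):
    print(f"h = {h:6d} m   d = {horizon_distance(h) / 1000:6.1f} km   dip = {horizon_dip(h):.2f} deg")

# h =      3 m   d =    6.2 km   dip = 0.06 deg
# h =     30 m   d =   19.5 km   dip = 0.18 deg
# h =    290 m   d =   60.8 km   dip = 0.55 deg
# h =  10000 m   d =  356.6 km   dip = 3.21 deg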
Cartoon defining the variables used above. d is the distance of visibility, h is the elevation of the observer O above the sea level.
But addressing the question more precisely: realising that the horizon is lower than normal (lower than the perpendicular to gravity) means realising the angle $\gamma$ by which the horizon drops below the flat-Earth horizon (the angle between $OH$ and the tangent to the circle at $O$; see the cartoon below). This angle depends on the altitude $h$ of the observer, following the equation:
$$\gamma=\frac{180}{\pi}\times\arctan\left(\frac{\sqrt{2\times{R}\times{h}}}{R}\right)$$
where $\gamma$ is in degrees, as shown in the cartoon below.
This results in this dependence between gamma (vertical axis) and h (horizontal axis):
Angle of the horizon below the flat-Earth horizon ($\gamma$, in degrees, on the vertical axis of this plot) as a function of the observer's elevation h above the surface (meters). Note that the apparent angular size of the Sun or the Moon is around 0.5 degrees.
So, at an altitude of only 290 m above sea level you can already see 60 km away, and the horizon will be lower than normal by the same angular size as the Sun (half a degree). While normally we are not capable of perceiving this small lowering of the horizon, there is a cheap telescopic device called a levelmeter that allows you to point in the direction perpendicular to gravity, revealing how lowered the horizon is when you are only a few meters high.
When you are on a plane ca. 10,000 m above sea level, you see the horizon 3.2 degrees below the astronomical horizon (O-H), that is, around 6 times the angular size of the Sun or the Moon. And you can see (under ideal meteorological conditions) to a distance of 357 km. Felix Baumgartner roughly doubled this number, but the pictures circulated in the news were taken with a very wide angle lens, so the ostensible curvature of the Earth they suggest is mostly an artifact of the camera, not what Felix actually saw.
This ostensible curvature of the Earth is mostly an artifact of the camera's wide-angle objective, not what Felix Baumgartner actually saw.
Your answer has led me deep into the entrails of the internet. Essentially I found https://en.wikipedia.org/wiki/Spherical_Earth#Hellenistic_astronomy and http://www.mse.berkeley.edu/faculty/deFontaine/flatworlds.html as I was unsure whether the curvature of Earth was measurable in ancient times by observing ships. I think now that the ships, and Aristotle's stars that become invisible as one wanders south, must have given some hard hints that Earth is spherical; Eratosthenes later measured its curvature. Also the guy named https://en.wikipedia.org/wiki/Strabo
You have answered it very nicely....But since, I have already accepted an answer above so I can't accept yours, But this answer is not less than an acceptable answer.... Thanks
Accepting yours as well.. :)
I can accept only one technically...but ya, I am accepting yours verbally
@Mani you can, actually, "change which answer is accepted, or simply un-accept the answer, at any time". You *may* (but are not, by any means, required) to change the accepted answer if a newer, better answer comes along later.
@DrGC, Re "knew that the Earth was curved, not flat", but doesn't that only prove *that* part of Earth is curved and Not the entire Earth?
Well, not if they were observing the same everywhere they went. |
# If machine A and machine B can finish the task in 4 hours when working
Intern
Joined: 18 Apr 2013
Posts: 34
If machine A and machine B can finish the task in 4 hours when working
06 Jul 2017, 06:59
Difficulty: 15% (low)
Question Stats: 74% (00:50) correct, 26% (01:13) wrong, based on 93 sessions
If machine A and machine B can finish the task in 4 hours when working together at their constant rates, in how many hours can machine B finish the task alone?
1) Machine A can finish the task in 6 hours alone
2) The hours that it takes machine B to finish the task alone is 6 hours longer than the hours that it takes machine A to finish the task alone
Manager
Joined: 23 Jul 2015
Posts: 156
Re: If machine A and machine B can finish the task in 4 hours when working
06 Jul 2017, 09:53
Given: Ra +Rb = 1/4
A. Ra= 1/6 --> 1/6 + 1/Tb = 1/4 --> Suff
B. Tb = Ta + 6
Rb = 1/Tb
Ra = 1/(Tb -6)
1/(Tb-6) + 1/Tb = 1/4
==> (Tb-12)(Tb-2) = 0
Tb = 2 is invalid because Ta cannot be negative (Ta = Tb - 6), so Tb = 12.
Ans D
Intern
Joined: 04 Jan 2016
Posts: 36
Location: United States
Concentration: General Management
Schools: Tuck '20 (M)
GMAT 1: 750 Q48 V44
GPA: 3.3
Re: If machine A and machine B can finish the task in 4 hours when working
06 Jul 2017, 11:08
By the work equation, we have
$$\frac{1}{x} + \frac{1}{y} = \frac{1}{4}$$
Since the question is asking for $$y$$, we only need to eliminate one of the variables to solve the problem.
(1) $$\frac{1}{6} + \frac{1}{y} = \frac{1}{4}$$
Sufficient: This gives us $$x$$, so we can now solve for $$y$$.
(2) $$\frac{1}{x} + \frac{1}{x+6} = \frac{1}{4}$$
Sufficient: This gives us $$y$$ in terms of $$x$$ so we can now solve for $$x$$ and substitute to solve $$y = x + 6$$.
Therefore, D
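If you want to sanity-check the arithmetic behind both statements, here is a small sympy sketch (my own addition, not part of the original solutions):

from sympy import symbols, Eq, Rational, solve

x, y = symbols("x y", positive=True)

# Statement (1): machine A alone takes 6 hours
print(solve(Eq(Rational(1, 6) + 1 / y, Rational(1, 4)), y))              # [12]

# Statement (2): machine B alone takes 6 hours longer than machine A
print(solve([Eq(1 / x + 1 / y, Rational(1, 4)), Eq(y, x + 6)], [x, y]))  # [(6, 12)]

Both statements pin machine B down to 12 hours, consistent with answer D.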
# Python Program to Read a Number n And Print the Series "1+2+…..+n= "
When it is required to display the sum of all the natural numbers up to a given value, a method can be defined that uses a loop to iterate over the numbers and returns their sum as output.
Below is a demonstration of the same −
## Example
def sum_natural_nums(val):
    my_sum = 0
    for i in range(1, val + 1):
        my_sum += i
    return my_sum

val = 9
print("The value is ")
print(val)
print("The sum of natural numbers up to 9 is : ")
print(sum_natural_nums(val))
## Output
The value is
9
The sum of natural numbers up to 9 is :
45
## Explanation
• A method named 'sum_natural_nums' is defined that takes a number as a parameter.
• A sum variable is initialized to 0.
• A loop iterates from 1 up to the number passed as a parameter.
• Each number is added to the running sum.
• This sum is returned as output.
• The value up to which the natural numbers should be summed is defined.
• The method is called with this number as a parameter.
• The relevant output is displayed on the console.
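As an aside, the same sum has a well-known closed form, n(n+1)/2, so the loop can be replaced entirely; a minimal sketch:

def sum_natural_nums(val):
    # n(n+1)/2 is the closed form for 1 + 2 + ... + n
    return val * (val + 1) // 2

print(sum_natural_nums(9))  # 45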
Published on 16-Apr-2021 12:05:14 |
# DITTO
## DITTO Definition / DITTO Means
The exact definition of DITTO is “The same, me too, I agree”.
## What is DITTO?
DITTO is “The same, me too, I agree”.
## The Meaning of DITTO
DITTO means “The same, me too, I agree”.
## What does DITTO mean?
DITTO is an acronym, abbreviation or slang word which means "The same, me too, I agree".
# What would be some fun/unique and relatively easy puzzles for people to put words together?
I am playing a table top role playing game and want my players to discover two words by putting together different hints. Let's assume the words are "devourer" and "avocado".
I was thinking of a simple anagram, such that for the word devourer the players would discover the letters o e v e r u r d over parts of the game and need to put them together. But that seems a bit boring to me. I believe the level of difficulty of an anagram is appropriate, however, and it is important that the players can figure it out. I'm assuming this is not very difficult, correct?
I was thinking for avocado the players could discover 5A 6D 2V 3O 4C 1A 7O over a long period of time, and that they could put this together to spell out AVOCADO. Again however this seems a bit boring, although maybe more unique than an anagram.
The players will be discovering these hints in a book that can include almost anything a book can include, with the caveat that the book is in their imagination and not an actual book (this is a role playing game). For example I could say to the player that he discovers "A1" in the book or sees a serpent or something, but can't present an actual image.
I should add that I could present an actual cool image to my players if I think it is worth it, but the caveat is that the players need to discover these words over 12 months, so I can't present the entire puzzle at once.
What would be some fun/unique and relatively easy puzzles for people to put words together?
• Has a useful answer been given? If so, please don't forget to $\color{green}{\checkmark \small\text{Accept}}$ it :) – Rubio Jun 19 '19 at 12:02
How about a simple one and the hints are words instead of letters? (And these words hide the letters for the real words.)
Here is one possible way.
Suppose you want to hide a word AVOCADO. There are $$7$$ letters. You then need to create $$7$$ words that start with letter 'A', 'B', 'C', up to 'G'; and consecutively end with letter 'A', 'V', 'O', up to 'O'.
For examples, you may hint these words:
ACACIA
BREZHNEV
CRESCENDO
EUPHORIA
FLABBERGASTED
GIZMO
This works better if you also tell beforehand that the secret word consists of $$7$$ letters (and none of your hints have $$7$$ letters).
The longer secret word, the better. As if they are progressing, they may start to notice that all hint words are having different first letters, especially if they are listing them lexicographically.
Alternatively, you may use exactly $$26$$ hint words so each alphabet will be used. The secret word then may not have $$26$$ letters, you may use "THEREALSECRETWORDISAVOCADO" as the ending letters.
Moreover, they also don't need to find all hint words. It may be sufficient to notice the pattern at some point and try to fill in the blanks (for example, if they haven't got the word EUPHORIA).
Hope this will help you to give some ideas!
(Kudos to this tool for finding the words.)
• Thank you! This is a good idea. I am concerned it may be too difficult, but I'm sure I could work with that! – Behacad Jun 14 '19 at 22:22
# Simplicity isn't a bad thing
Don't get caught up in wanting to make your puzzle as complicated as possible while still being solvable. This puzzle is lent a great deal of uniqueness by the fact that they'll be putting it together over the course of 18 months. Don't go overboard in making it more than it needs to be. I don't know what level of puzzle your players are used to from you, but if I put together clues over the course of an 18-month campaign, I'd feel accomplished in solving it even if the actual puzzle-solving step was as simple as an anagram.
Also consider the fact that the added complexity from the extremely long clue-gathering time may obfuscate the puzzle more than you think. It's easy to take the high-level view now, but how sure are you that your players will even recognize these clues as things that should be saved to be put together at the end? I think your main concern needs to be making sure you're leaving enough breadcrumbs to make your players aware they're making progress in a giant puzzle, not complicating it.
• I don't want the puzzle to be complicated, I made that clear by stating I want them to solve it. I did use the term simple to describe the AVOCADO one, and perhaps that was a mistake. I am more concerned about this being boring. Is an anagram really the best I can come up with? Everyone has done it. I'd rather they do something just as easy that is not something they've done before. I am looking for fun options or puzzles that perhaps they haven't done before. – Behacad Jun 14 '19 at 16:09 |
Bow Tie
Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling.
Weekly Problem 18 - 2007
A regular pentagon together with three sides of a regular hexagon form a cradle. What is the size of one of the angles?
Hexapentagon
Weekly Problem 53 - 2007
The diagram shows a regular pentagon and regular hexagon which overlap. What is the value of x?
U in a Pentagon
Stage: 3 Short Challenge Level:
The diagram shows a regular pentagon $PQRST$. The lines $QS$ and $RT$ meet at $U$. What is the size of angle $PUR$?
If you liked this question, here is an NRICH task which challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges.
# 2016 is absolutely awesome 11
Discrete Mathematics Level 5
Mary starts with one standard die on a table.
At each step, she rolls all the dice on the table: if all of them show a 6 on top, then she places one more die on the table; otherwise, she does nothing more on this step.
Let $$D$$ be the number of dice on the table after 2016 such steps.
What is the expected value (average value) of $$6^D$$?
# 900800700600500400300200100
###### Question:
900 800 700 600 500 400 300 200 100
#### Similar Solved Questions
##### How would you carry out the following reactions?
How would you carry out the following reactions?...
##### For the arithmetic sequence with given first term 6 and common difference 5: its nth term is ___; its 10th term is 51
For the arithmetic sequence with given first term 6 and common difference 5: its nth term is ___; its 10th term is 51...
##### P92k M M -160k -24 ft-k 6. (Text 6.8-6) The member shown in Figure P6.8-6 is...
P92k M M -160k -24 ft-k 6. (Text 6.8-6) The member shown in Figure P6.8-6 is part of a braced frame. The axial load and end moments are based on service loads composed of equal parts dead load and live load. The frame analysis was performed consistent with the effective length method, so the flexura...
##### Dittman's Variety Store is completing the accounting process for the current year just ended, December 31....
Dittman's Variety Store is completing the accounting process for the current year just ended, December 31. The transactions during year have been journalized and posted. The following data with respect to adjusting entries are available: a. Wages earned by employees during December, unpaid and u...
##### You are an engineer in charge of designing a new generation of elevators for a prospective upgrade to the Empire State Building. Before the state legislature votes on funding for the project, they would like you to prepare a report on the benefits of upgrading the elevators. One of the numbers that they have requested is the time it will take the elevator to go from the ground floor to the 102nd floor observatory. They are unlikely to approve the project unless the new elevators make the trip much fast
You are an engineer in charge of designing a new generation of elevators for a prospective upgrade to the Empire State Building. Before the state legislature votes on funding for the project, they would like you to prepare a report on the benefits of upgrading the elevators. One of the numbers that they...
##### QUESTION 4 PARTNERSHIPS [20 MARKS REQUIRED Use the information provided below to prepare the Statement of Changes in Eq...
QUESTION 4 PARTNERSHIPS [20 MARKS REQUIRED Use the information provided below to prepare the Statement of Changes in Equity for the year ended 30 June 2014. Use the following format STATEMENT OF CHANGES IN EQUITY FOR THE YEAR ENDED 30 JUNE 2014 Fuji Film Total Capital Accounts Balance at 30 June 201...
##### What is the energy density in the electric field at the surface of a 2.70-cm-diameter sphere...
What is the energy density in the electric field at the surface of a 2.70-cm-diameter sphere charged to a potential of 2000V ? ______________ J/m3...
##### Iutmer huys" cowrb woals, and nensWin man who Ilas cows , qoats, ond' hens Find Forthe Ioplbar; press ALT+EIQ (PC) or ALT FN-F1O (Mac)choices Ifiat tho larmor hias
Iutmer huys" cowrb woals, and nensWin man who Ilas cows , qoats, ond' hens Find Forthe Ioplbar; press ALT+EIQ (PC) or ALT FN-F1O (Mac) choices Ifiat tho larmor hias...
##### Find the general solution of9y + 26y24y=0given that r = 2 is root of the characteristic cquation.0 y=C; eC2e" C3b) 0 y=C1C2 eY=C]88*+ +Cxey-C,e2*y-C,e2* C2C3Nonc of the above_
Find the general solution of 9y + 26y 24y=0 given that r = 2 is root of the characteristic cquation. 0 y=C; e C2e" C3 b) 0 y=C1 C2 e Y=C] 88*+ +Cxe y-C,e2* y-C,e2* C2 C3 Nonc of the above_...
##### Part CFor x = 2.00 m what is the radius of the third dark ring in the diffraction pattern?Express your answer to three significant figures and include the appropriate units:pAT3ValuecmSubmitPrevious Answers Request Answer
Part C For x = 2.00 m what is the radius of the third dark ring in the diffraction pattern? Express your answer to three significant figures and include the appropriate units: pA T3 Value cm Submit Previous Answers Request Answer...
##### A supermarket organisation buys in particular foodstuff from four suppliers A, B, C, D and subjects samples of this to regular tasting tests by expert panels_ Various characteristics are scored, and the total score for the product is recorded_ Four tasters b, €, d at four sessions obtained the results below: Analyse and comment on these:TasterSessionA: 21 B: 17 C: 18 D: 20 B: 20 D: 22 A: 23 C:19 C: 20 A: 24 D: 22 B: 19 D: 22 C: 21 B: 22 A: 26
A supermarket organisation buys in particular foodstuff from four suppliers A, B, C, D and subjects samples of this to regular tasting tests by expert panels_ Various characteristics are scored, and the total score for the product is recorded_ Four tasters b, €, d at four sessions obtained the...
##### Match the following terms with their descriptions: inversion, duplication, deletion, and translocation. 1. A piece of...
Match the following terms with their descriptions: inversion, duplication, deletion, and translocation. 1. A piece of a chromosome separates and then reattaches in the opposite orientation 2. A piece of a chromosome gets permanently removed 3. A piece of a chromosome gets attached to a nonhomologous...
##### 2) Infr - 2) < 0 if: a) x >3 b) x < 2 log42" then 4~J)U(2,0) d) (-0,-6) U (1,0) c)x > 2 d)* <3
2) Infr - 2) < 0 if: a) x >3 b) x < 2 log42" then 4 ~J)U(2,0) d) (-0,-6) U (1,0) c)x > 2 d)* <3...
##### 8. For each of the following, either draw a undirected graph satisfying the given criteria or...
8. For each of the following, either draw a undirected graph satisfying the given criteria or explain why it cannot be done. Your graphs should be simple, i.e. not having any multiple edges (more than one edge between the same pair of vertices) or self-loops (edges with both ends at the same vertex)...
##### SUppo5e thatthe proportionsblood phenotypesparticula populationtollonsAssumlng that the phenotypesrandomly lected IndluldualsIndependentanother; Ahatprobabllity that both Dhenocypes are(EnteranseeTour decmai Dlaces:Mhatthe probability that the phenotypesrandomy selected individuals match? Enter your answeriour decima places,
SUppo5e thatthe proportions blood phenotypes particula population tollons Assumlng that the phenotypes randomly lected Indlulduals Independent another; Ahat probabllity that both Dhenocypes are (Enter ansee Tour decmai Dlaces: Mhat the probability that the phenotypes randomy selected individuals mat...
##### 1. Find 10 in the network from following figure using superposition theorem only. 12 kn 120...
1. Find 10 in the network from following figure using superposition theorem only. 12 kn 120 6 mA 12 V 6 kn 6kn...
##### A company manufactures memory chips for microcomputers. Based on data, they produced the following price-demand function: d(x) = 65 - 3x, 1 < x < 20, where x is the number of memory chips in millions and d(x) is the price in dollars. The financial department for the company established the following cost function for producing and selling x million memory chips: C(x) = 120 + 15x million dollars. Write the company's revenue function R(x) and indicate its domain. Write the profit function P(x) for producing and sell
A company manufactures memory chips for microcomputers. Based on data, they produced the following price-demand function: d(x) = 65 - 3x, 1 < x < 20, where x is the number of memory chips in millions and d(x) is the price in dollars. The financial department for the company established the following cost funct...
##### Hello.. i have timer 555 ic working in astable mode .. i need the output to...
hello.. i have timer 555 ic working in astable mode .. i need the output to be: f=1.2Hz time low(t0) = 416.6666ms time high(t1)=416.6666ms what are the values of R1 , R2 and C ? Vcc 4 Output 7 3 NE555 2 2 10 nF...
##### A Cepheid variable star is a star whose brightness alternately increases and decreases. For a certain star, the interval between times of maximum brightness is 5.8 days. The average brightness of this star is 2.0 and its brightness changes by ±0.25. In view of these data, the brightness of the star at time t, where t is measured in days, has been modeled by the function B(t) = 2.0 + 0.25 sin(2πt/5.8). (a) Find the rate of change of the brightness after t days. (b) Find, correct to two decimal places, the rate of increase after five days.
A Cepheid variable star is a star whose brightness alternately increases and decreases. For a certain star, the interval between times of maximum brightness is 5.8 days. The average brightness of this star is 2.0 and its brightness changes by ±0.25. In view of these data, the brightness of the star at time t, where t is measured in days, has ...
##### When an archer pulls an arrow back in his bow, he is storing potential energy in...
When an archer pulls an arrow back in his bow, he is storing potential energy in the stretched bow. (a) Compute the potential energy stored in the bow, if the arrow of mass 5.05 10-2 kg leaves the bow with a speed of 32.0 m/s. Assume that mechanical energy is conserved. (b) What average force must t...
##### The remaining questions in this section relate to the following case: Ahmed has worked as a...
The remaining questions in this section relate to the following case: Ahmed has worked as a phlebotomist in the local hospital for the last 7 years. Last year, he began to complain of watery, nasal congestion and wheezing whenever he went to work. He suspected he was allergic to something at the hos...
##### -P-SPOR, esata 0/50 Submissions used Sketch the triangle. ZA-50°B 78, c-280 Solve the triangle using the...
-P-SPOR, esata 0/50 Submissions used Sketch the triangle. ZA-50°B 78, c-280 Solve the triangle using the Law of Sines. (Round side lengths to the nearest integer.)...
##### OCI is presented net of tax—show me an example of how the taxes impact the amount...
OCI is presented net of tax—show me an example of how the taxes impact the amount shown and state why “net of tax” makes sense....
##### Ablock 0/ weight % 20 0 Nei o ndhnlnze ndned plane, which mates angle = F40 ith resped lo Ihe horizontal as shown in the fqute (Figure IJA force 0l magnitude 08 N , applied parally Ihe Inchne jusl sutident lo Ful Ft oa up the Plane conaiantenandRevet Conslants Palodk TabaPJrAUne bla ct ioves 4 incline corislant speed What is the tota| work WcA:] done on ine Bock Wurcur Ihe block Meovts dislanc E0 Ihe Indne? Include oky tha wotk done nftat Ihe block has oturtod moving cotulani apoeu llalthu War
Ablock 0/ weight % 20 0 Nei o ndhnlnze ndned plane, which mates angle = F40 ith resped lo Ihe horizontal as shown in the fqute (Figure IJA force 0l magnitude 08 N , applied parally Ihe Inchne jusl sutident lo Ful Ft oa up the Plane conaiantenand Revet Conslants Palodk Taba PJrA Une bla ct ioves 4 ...
##### The graph is a transformation of one of the basic functions. Find the equation that defines...
The graph is a transformation of one of the basic functions. Find the equation that defines the function. - The equation is y=U. (Type an expression using x as the variable. Do not simplify.)... |
# Qubyte Codes
## Marqdown
Published
Markdown is the standard for writing in techie circles these days, but it's pretty minimal. For a readme it's all you need, but if you create a site around Markdown like I have then you pretty quickly bump into its limitations. Markdown is deliberately limited, so it's no fault of the language or its creator!
Nevertheless, over time I've added my own tweaks and extensions upon Markdown, so I've decided to document them, and name the dialect Marqdown. Naming may seem a little arrogant, but it's mostly to disambiguate what I'm writing with more common Markdown variants.
My variant is based on the default configuration provided by marked, with additions layered on top. This is mostly the original flavour of Markdown with a few deviations to fix broken behaviour. As I add features I'll document them in this post.
## Footnotes
I use footnotes[1] from time to time. The way I've implemented them makes the superscript a link to the footnote text, and the footnote text itself has a backlink to the superscript, so you can jump back to where you were.
The footnote in the previous paragraph is encoded like this:
I use footnotes[^][sparingly] from time to time.
This was an interesting feature to implement because it produces content out of the regular flow of the document. The markdown engine had to be abused a bit to create the superscript links first and keep a list of their footnote texts. Once the document is rendered, a post-render routine checks for any footnote texts, and when there's at least one it appends a section with an ordered list of footnotes. Another complication is index pages. For the blog posts index page only the first paragraph of each post is used, and footnote superscripts have to be removed from those.
## Languages
HTML supports language attributes. Most of the time a (well-built) page will have a single language attribute on the opening <html> tag itself, setting the language for the entire document.
I write notes in mixed English and Japanese as I learn the latter. When working with CJK text it's particularly important to give elements containing such text the appropriate language tag so that Chinese characters are properly rendered (there are divergences which are important).
I wrote a Markdown syntax extension to add these tags. Since my documents are mostly in English, this remains as the language attribute of each page as a whole. For snippets of Japanese I use the syntax extension, which looks like:
The text between here {ja:今日は} and here is in Japanese.
This snippet renders to:
<p>
The text between here <span lang="ja">今日は</span> and here is in Japanese.
</p>
Simple enough. The span is unavoidable because there is only text within it and text surrounding it. Where the renderer gets smart is in eliminating the span! If the span is the only child of its parent, the renderer eliminates the span by moving the language attribute to the parent. For example:
- English
- {ja:日本語}
- English
migrates the language attribute to the parent <li> to eliminate a span:
<ul>
<li>English</li>
<li lang="ja">日本語</li>
<li>English</li>
</ul>
Similarly, the renderer is smart enough to see when the span has only one child, and can move the language attribute to the child to eliminate the span. Example:
{ja:_すごい_}
migrates the language attribute to the spans only child, an <em>:
<em lang="ja">すごい</em>
This becomes particularly important in the case of my notes, where it's common to nest ruby elements inside these language wrappers. There's a ruby annotation in the next section, and you'll see the language attributes appear directly on the ruby element if you inspect it.
As with footnotes, the language attribute migration and span elimination is handled using JSDOM after a markdown document is rendered as part of a post-render routine. In the future I may look into adapting marked to render directly to JSDOM rather than to a string.
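For illustration only, the two migration rules can be sketched in Python with BeautifulSoup (this is not the site's actual code, which uses JSDOM; the function names and element handling here are my own):

from bs4 import BeautifulSoup

def significant(node):
    # a node "counts" if it is an element or non-whitespace text
    return getattr(node, "name", None) is not None or str(node).strip() != ""

def migrate_lang_spans(html):
    soup = BeautifulSoup(html, "html.parser")
    for span in soup.find_all("span", lang=True):
        lang = span["lang"]
        parent = span.parent
        # Rule 1: the span is the only significant child of its parent,
        # so the attribute can live on the parent instead.
        siblings = [n for n in parent.contents if significant(n)]
        if len(siblings) == 1 and siblings[0] is span:
            parent["lang"] = lang
            span.unwrap()
            continue
        # Rule 2: the span wraps exactly one element,
        # so the attribute can move down onto that child.
        kids = [n for n in span.contents if significant(n)]
        if len(kids) == 1 and getattr(kids[0], "name", None) is not None:
            kids[0]["lang"] = lang
            span.unwrap()
    return str(soup)

print(migrate_lang_spans('<ul><li>English</li><li><span lang="ja">日本語</span></li></ul>'))
# <ul><li>English</li><li lang="ja">日本語</li></ul>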
## Ruby annotations
I'm studying Japanese. It's pretty common to see annotations to help with the pronunciation of words containing Chinese characters. This could be because the text is intended for learners like me, but it's also common to see it for less common words, or where the reading of a word may be ambiguous.
These annotations typically look like kana rendered above or below the word (when Japanese is written left-to-right), or to one side (when Japanese is written from top to bottom). Ruby annotations are not unique to Japanese, but in the Japanese context they're called 振(ふ)り仮名(がな) (furigana), and you can see them right here in the Japanese text in this sentence! The code for it looks like this:
^振,ふ,り,,仮名,がな^
The delimiters are the carets, odd elements are regular text, and even elements are their annotations. So, ふ goes above 振, nothing goes above り (it's already a kana character), and がな goes above 仮名.
There are actually specific elements for handling ruby annotations, so what you see rendered is only from HTML and CSS! They're pretty fiddly to work with manually though, so this extension saves me a lot of time and saves me from lots of broken markup.
## Highlighted text
The syntax of this extension is borrowed from elsewhere (I didn't invent it). This addition allows me to wrap stuff in <mark> elements. By default, this is a bit like using a highlighter pen on text. The syntax looks like:
Boring ==important== boring again.
which renders to:
<p>
Boring <mark>important</mark> boring again.
</p>
which looks like:
Boring important boring again.
This is another extension I use heavily in my language notes to emphasize the important parts of grammar notes.
## Fancy maths
Now and then I do a post with some equations in. I could render these elsewhere, but I like to keep everything together for source control. Add to that, I want to render the maths statically to avoid client side rendering (and all the JS I'd have to include to do that).
I settled on another common markdown extension to do this, which is to embed LaTeX code. The previous extensions are all inline, whereas maths blocks are blocks. I use MathJax to render LaTeX within the maths blocks to SVG. The resultant SVG has some inline style and unnecessary attributes stripped out, and a title added, with an aria-labelledby pointing to that title for accessibility purposes.
This:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
Results in:
Nice, right? If you hover over it a tooltip will show you the original LaTeX code. I haven't figured out inline maths snippets yet.
1. sparingly |
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp010k225b06h
Title: Estimating the Employer Switching Costs and Wage Responses of Forward-Looking Engineers
Authors: Fox, Jeremy T.
Keywords: employer switching; switching costs; geographic mobility; firm compensation schemes; monopsony
Issue Date: 1-Dec-2008
Series/Report no.: Working Papers (Princeton University. Industrial Relations Section); 543
Abstract: I estimate the relative magnitudes of worker switching costs and whether the employer switching of experienced engineers responds to outside wage offers. Institutional features imply that voluntary turnover dominates switching in the market for Swedish engineers from 1970–1990. I use data on the allocation of engineers across a large fraction of Swedish private sector firms to estimate the relative importance of employer wage policies and switching costs in a dynamic programming, discrete choice model of voluntary employer choice. The differentiated firms are modeled in employer characteristic space and each firm has its own age-wage profile. I find that a majority of engineers have moderately high switching costs and that a minority of experienced workers are responsive to outside wage offers. Younger workers are more sensitive to outside wage offers than older workers.
URI: http://arks.princeton.edu/ark:/88435/dsp010k225b06h
Appears in Collections: IRS Working Papers
# The last digit is my target
As shown below, the last digit of $n!$ for each of $n=5, 6, 7$ is zero: $5!=12{\color{#D61F06}0},\quad 6!=72{\color{#D61F06}0},\quad 7!=504{\color{#D61F06}0}.$ Is it true that the last digit of $n!$ is zero for all positive integers $n>4?$
# How to offset multiple smoke instances by time?
I want to use pre-simulated smoke simulation assets from a library to avoid having to re-bake each time. For example, a large open scene with several smoke stacks rising off of a ground plane, I would simulate one smoke stack, bake the cache to an external file and then in my new scene file link same-sized cubes to the pre-baked caches and move and rotate them around to duplicate that particular smoke simulation. Is it possible to offset these by time so they're not exactly the same, or is it only possible with separate caches for each?
• ahhh... blender still has no this feature :( – mifth Sep 27 '14 at 11:24
## 1 Answer
If you look at the point cache files, the default filename follows the format: [identifier]_[frame]_[index].bphys. Unfortunately, the [frame] in the filename doesn't actually correspond to the cache's actual frame number.
That means the only way to offset a cache's frames is by parsing the contents of each file (e.g. writing a Python script). Apparently, modifying the contents is not that straightforward. This page explains a little (towards the bottom). Glancing at the Blender source code, it looks like you'll need to edit each point's time, dietime, and lifetime; some cache files may also store the previous cache point's time data too.
If you know C, It may be easier to just write a patch for Blender to add the feature. That way, you can make use of the read/write methods from Blender's source. You could also request the feature; perhaps there is a developer who is familiar with the code who can easily add it. |
Tag Info
0
The core mechanics of AlphaZero during selfplay and real tournament games are the same: something similar to Monte Carlo Tree Search is done but guided by the current neural network instead of random simulations. The network is only doing inference, it's not learning during a tree search. There's a great summary diagram here. The differences between selfplay ...
1
I have two suggestions that you can look into. Based on my own work in RL, I believe the first one will require less work to implement. If the observability of the environment is not an issue, then you could give the agent a relative measure (distance to the goal) as part of the observation to provide it with knowledge of how far away it is. You can also ...
Question 1: I don't think they ran AlphaGo or AlphaGoZero in training mode during tournament matches because the computing power required for this is really large. I don't recall if this is described in the documentary but see this quote from the AlphaZero paper (page 4): using 5,000 first-generation TPUs (15) to generate self-play games and 64 second-...
You sample according to the probability distribution $\pi(a \mid s, \theta)$, so you do not always take the action with the highest probability (otherwise there would be no exploration but just exploitation), but the most probable action should be sampled the most. However, keep in mind that the policy, $\theta$, changes, so also the probability distribution....
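A minimal sketch of what sampling from such a distribution looks like in practice (the probabilities below are made-up values for a single state, not taken from the answer above):

```python
import numpy as np

# Action probabilities produced by a policy network for one state (illustrative values)
probs = np.array([0.6, 0.3, 0.1])

# Sampling rather than taking the argmax keeps some exploration:
# the most probable action is drawn most often, but not always
action = np.random.choice(len(probs), p=probs)
print(action)
```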
there should be absolutely no problem with training an agent on any available episode roll-out data. That is because an MDP implies that for any state S, the optimal action to take is entirely dependent on the state. The desired end-state of the trained model is that it can identify the optimal action. When comparing reinforcement learning (RL) methods, you ...
# Effects of Reversing Entries
SE 8. Assume that prior to the adjustments in SE 7, Salaries Expense had a debit balance of $1,800 and Salaries Payable had a zero balance. Prepare a T account for each of these accounts. Enter the beginning balance; post the adjustment for accrued salaries, the appropriate closing entry, and the reversing entry; and enter the transaction in the T accounts for a payment of $480 for salaries on April 3.
# Fast linear regression robust to outliers
I am dealing with linear data with outliers, some of which are at more the 5 standard deviations away from the estimated regression line. I'm looking for a linear regression technique that reduces the influence of these points.
So far what I did is to estimate the regression line with all the data, then discard the data point with very large squared residuals (say the top 10%) and repeated the regression without those points.
In the literature there are lots of possible approaches: least trimmed squares, quantile regression , m-estimators, etc. I really don't know which approach I should try, so I'm looking for suggestions. The important for me is that the chosen method should be fast because the robust regression will be computed at each step of an optimization routine. Thanks a lot!
A method that you did not mention is the use of Student-$t$ errors with unknown degrees of freedom. However, this may not be as fast as you need. – user10525 Dec 19 '12 at 11:37
@Procrastinator: (It's easy to imagine a configuration of outliers where) this will not work. – user603 Dec 19 '12 at 12:06
@user603 That is true for any method, there is no Panacea ;). I was simply pointing out another method. +1 to your answer. – user10525 Dec 19 '12 at 12:18
@Procrastinator: I agree that all methods will fail for some rate of contamination. And 'failure' in this context can be defined quantitatively and empirically. But the idea is to still favour those methods that will fail only at higher rates of contamination. – user603 Dec 19 '12 at 12:24
Since this is being done repeatedly during an optimization routine, perhaps the data in the regression are (eventually) changing slowly. This suggests an algorithm adapted to your situation: start with some form of robust regression, but when taking small steps during the optimization, simply assume in the next step that any previous outlier will remain an outlier. Use OLS on the data, then check whether the presumptive outliers are still outlying. If not, restart with the robust procedure, but if so--which might happen often--you will have saved a lot of computation. – whuber Dec 19 '12 at 16:38
If your data contains a single outlier, then it can be found reliably using the approach you suggest (without the iterations though). A formal approach to this is
Cook, R. Dennis (1979). Influential Observations in Linear Regression. Journal of the American Statistical Association (American Statistical Association) 74 (365): 169–174.
For finding more than one outlier, for many years, the leading method was the so-called $M$-estimation family of approaches. This is a rather broad family of estimators that includes Huber's $M$ estimator of regression, Koenker's L1 regression as well as the approach proposed by Procrastinator in his comment to your question. The $M$ estimators with convex $\rho$ functions have the advantage that they have about the same numerical complexity as a regular regression estimation. The big disadvantage is that they can only reliably find the outliers if:
• the contamination rate of your sample is smaller than $\frac{1}{1+p}$ where $p$ is the number of design variables,
• or if the outliers are not outlying in the design space (Ellis and Morgenthaler (1992)).
You can find good implementations of $M$ ($l_1$) estimates of regression in the robustbase (quantreg) R package.
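The packages named above are R; as a rough Python analogue (my addition, not part of the original answer), scikit-learn ships a Huber-type $M$-estimator:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
y[:10] += 20  # contaminate a few responses

huber = HuberRegressor(epsilon=1.35).fit(X, y)
print(huber.coef_)
```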
If your data contains more than $\lfloor\frac{n}{p+1}\rfloor$ outliers potentially also outlying in the design space, then finding them amounts to solving a combinatorial problem (equivalently the solution to an $M$ estimator with re-descending/non-convex $\rho$ function).
In the last 20 years (and especially the last 10) a large body of fast and reliable outlier detection algorithms has been designed to approximately solve this combinatorial problem. These are now widely implemented in the most popular statistical packages (R, Matlab, SAS, STATA,...).
Nonetheless, the numerical complexity of finding outliers with these approaches is typically of order $O(2^p)$. Most algorithms can be used in practice for values of $p$ in the mid-teens. Typically these algorithms are linear in $n$ (the number of observations) so the number of observations isn't an issue. A big advantage is that most of these algorithms are embarrassingly parallel. More recently, many approaches specifically designed for higher dimensional data have been proposed.
Given that you did not specify $p$ in your question, I will list some references for the case $p<20$. Here are some papers that explain this in greater detail in this series of review articles:
Rousseeuw, P. J. and van Zomeren B.C. (1990). Unmasking Multivariate Outliers and Leverage Points. Journal of the American Statistical Association, Vol. 85, No. 411, pp. 633-639.
Rousseeuw, P.J. and Van Driessen, K. (2006). Computing LTS Regression for Large Data Sets. Data Mining and Knowledge Discovery archive Volume 12 Issue 1, Pages 29 - 45.
Hubert, M., Rousseeuw, P.J. and Van Aelst, S. (2008). High-Breakdown Robust Multivariate Methods. Statistical Science, Vol. 23, No. 1, 92–119
Ellis S. P. and Morgenthaler S. (1992). Leverage and Breakdown in L1 Regression. Journal of the American Statistical Association, Vol. 87, No. 417, pp. 143-148.
A recent reference book on the problem of outlier identification is:
Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and Methods. Wiley, New York.
These (and many other variations of these) methods are implemented (among others) in the robustbase R package.
Now that is a great answer! – Peter Flom Dec 19 '12 at 12:43
Thanks a lot user603! In my problem $p < 10$ and there are no outliers in the design space (because the explanatory variables are simulated from a normal distribution). So maybe I can try with the m-estimator? In any case all the other references you have given me will be very useful once I will start working on more complex applications ($p$ >> 10) of my algorithm. – Matteo Fasiolo Dec 19 '12 at 13:40
@Jugurtha: In that case (no outlier in the design space and $p<10$) $M$ estimators are indeed the preferred solution. Consider the 'lmrob..M..fit' function in the robustbase package, the 'rlm' function in the MASS package or the l1 regression in the quantreg package. I would still also run the LTS-regression in a few case and compare the results, since they can withstand more outliers. I would do this just as a check of whether the contamination rate is not higher than you suspect. – user603 Dec 19 '12 at 13:47
Have you looked at RANSAC (Wikipedia)?
This should be good at computing a reasonable linear model even when there are a lot of outliers and noise, as it is built on the assumption that only part of the data will actually belong to the mechanism.
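For reference, a minimal sketch with scikit-learn's RANSACRegressor (my addition; the parameters are illustrative, not tuned, and X, y stand for the data from the question's setting):

```python
from sklearn.linear_model import RANSACRegressor

# Uses a LinearRegression base estimator by default
ransac = RANSACRegressor(min_samples=0.5, random_state=0)
ransac.fit(X, y)
print(ransac.estimator_.coef_)
print(ransac.inlier_mask_.sum(), 'inliers kept')
```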
yea but adding a simple re-weighting step yields an estimator (LTS) that is equally robust and so much more stable and statistically efficient. Why not do? – user603 Apr 29 '13 at 1:13
For simple regression (single x), there's a lot to be said for the Theil-Sen line in terms of robustness to y-outliers and to influential points as well as generally good efficiency (at the normal) compared to LS for the slope. The breakdown point for the slope is nearly 30%; as long as the intercept (there are a variety of possible intercepts people have used) doesn't have a lower breakdown, the whole procedure copes with a sizable fraction of contamination quite well.
Its speed might sound like it would be bad - median of $\binom{n}{2}$ slopes looks to be $O(n^2)$ even with an $O(n)$ median - but my recollection is that it can be done more quickly if speed is really an issue ($O(n \log n)$, I believe)
Edit: user603 asked for an advantage of Theil regression over L1 regression (they have the same breakdown point on y-outliers). The answer is the other thing I mentioned - influential points:
The red line is the $L_1$ fit (from the function rq in the quantreg package). The green is a fit with a Theil slope. All it takes is a single typo in the x-value - like typing 533 instead of 53 - and this sort of thing happens. So the $L_1$ fit isn't robust to a single typo in the x-space.
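scikit-learn also ships a Theil–Sen implementation (a spatial-median generalisation of the univariate line described above), in case a Python version is more convenient; the data here is simulated purely for illustration:

```python
import numpy as np
from sklearn.linear_model import TheilSenRegressor

rng = np.random.RandomState(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)
x[0] = 533  # a single gross typo in the x-space, as in the example above

ts = TheilSenRegressor(random_state=0).fit(x.reshape(-1, 1), y)
print(ts.coef_, ts.intercept_)
```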
it can indeed be computed in time $n\log n$. Could you elaborate on what advantage (in the single x case) the T-S estimator has over, say, $l_1$ regression? – user603 Mar 10 '13 at 9:39
@user603 See the edit. – Glen_b Mar 10 '13 at 11:28
(+1) thanks for the edit. It's important to point this feature out. – user603 Mar 11 '13 at 9:27
And what's the advantage over an MM-estimate, such as lmrob() from R package robustbase or even {no need to install anything but 'base R'} rlm(*, ... method="MM") from package MASS? These have full breakdown point (~ 50%) and are probably even more efficient at the normal. – Martin Mächler May 17 '13 at 12:41
@MartinMächler It seems like you're arguing against a claim I haven't made there. If you'd like to put up an answer which also contains a comparison of other high-breakdown robust estimators, especially ones that are roughly as simple to understand for someone at the level of the OP, I'd look forward to reading it. – Glen_b May 17 '13 at 23:05
I found the $l_1$ penalized error regression best. You can also use it iteratively, re-weighting samples that are not very consistent with the solution. The basic idea is to augment your model with errors: $$y=Ax+e$$ where $e$ is the unknown error vector. Now you perform the regression on $$\parallel y-Ax-e \parallel_2^2+ \lambda \parallel e \parallel_1.$$ Interestingly, you can of course use the "fused lasso" for this when you can estimate the certainty of your measurements in advance, putting this as a weighting $$W=\operatorname{diag}(w_i)$$ and solving the new, slightly different task $$\parallel y-Ax-e \parallel_2^2 + \lambda \parallel W e \parallel_1$$
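One way to solve this formulation in practice (my addition, not what the answerer used) is with a generic convex solver such as cvxpy; $\lambda$ and the data below are placeholders:

```python
import cvxpy as cp
import numpy as np

n, p, lam = 200, 3, 1.0
A = np.random.randn(n, p)
y = A @ np.ones(p) + 0.1 * np.random.randn(n)
y[:5] += 15  # a few gross outliers

x = cp.Variable(p)
e = cp.Variable(n)  # one error term per observation
objective = cp.Minimize(cp.sum_squares(A @ x + e - y) + lam * cp.norm1(e))
cp.Problem(objective).solve()
print(x.value)
```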
• # AIS-BN
• Referenced in 25 articles [sw02223]
• algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have ... unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm ... state of the art general purpose sampling algorithms, likelihood weighting and self-importance sampling ... network, with evidence as unlikely as $10^{-41}$. While the AIS-BN algorithm always performed...
• # MultiNest
• Referenced in 36 articles [sw10481]
• nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated ... robustness, as compared to the original algorithm presented in Feroz & Hobson, which itself significantly outperformed...
• # SubChlo
• Referenced in 11 articles [sw22431]
• composition and the evidence-theoretic K-nearest neighbor (ET-KNN) algorithm. The chloroplast ... introducing the evidence-theoretic K-nearest neighbor (ET-KNN) algorithm, we developed a method...
• # BLOG
• Referenced in 41 articles [sw22025]
• unbounded numbers of objects. Furthermore, complete inference algorithms exist for a large fragment ... probabilistic form of Skolemization for handling evidence...
• # KronFit
• Referenced in 41 articles [sw20428]
• they do so. We also provide empirical evidence showing that Kronecker graphs can effectively model ... then present KRONFIT, a fast and scalable algorithm for fitting the Kronecker graph generation model...
• # dynesty
• Referenced in 6 articles [sw28387]
• Python package to estimate Bayesian posteriors and evidences (marginal likelihoods) using Dynamic Nested Sampling ... benefits of Markov Chain Monte Carlo algorithms that focus exclusively on posterior estimation while retaining ... Nested Sampling’s ability to estimate evidences and sample from complex, multi-modal distributions ... extension to Dynamic Nested Sampling, the algorithmic challenges involved, and the various approaches taken...
• # DDA
• Referenced in 2 articles [sw29479]
• algorithm, extracted from previous work with evolutionary algorithms, that takes as input a finite groupoid ... give theoretical and experimental evidence that this algorithm is successful for all idemprimal term continuous...
• # GeneWise
• Referenced in 5 articles [sw29729]
• combination of hidden Markov models (HMMs). Both algorithms are highly accurate and can provide both ... gene structures when used with the correct evidence...
• # ABC-SubSim
• Referenced in 12 articles [sw10099]
• proposed algorithm outperforms other recent sequential ABC algorithms in terms of computational efficiency while achieving ... SubSim readily provides an estimate of the evidence (marginal likelihood) for posterior model class assessment...
• # PGSL
• Referenced in 11 articles [sw04748]
• performs better than genetic algorithms and advanced algorithms for simulated annealing ... increasingly better than these other approaches. Empirical evidence of the convergence of PGSL is provided... |
Could be tetration if this integral converges JmsNxn Long Time Fellow Posts: 571 Threads: 95 Joined: Dec 2010 05/04/2014, 09:06 PM (This post was last modified: 05/04/2014, 09:07 PM by JmsNxn.) As a note on a similar technique to the one you are applying, Mike, but trying to keep the vibe much more fractional calculus'y (since it's what I am familiar with): we will try the following function. Take $0 < \lambda$, with $\lambda$ fairly small, and set $\beta(w) = \sum_{n=0}^\infty \frac{w^n}{n!(^n e)}$ We know that $|\beta(w)| < C e^{\kappa |w|}$ for any $\kappa > 0$, because $\frac{1}{(^n e)} < C_\kappa \kappa^n$. So, for $\Re(z) > 0$, define $F(-z) = \frac{1}{\Gamma(z)}\int_0^\infty e^{-\lambda x}\beta(-x)x^{z-1} \,dx$ This function should be smaller than tetration at natural values: $F(n) = \sum_{j=0}^n \frac{n!(-\lambda)^{n-j}}{j!(n-j)!(^j e)}$ We would get the entire expression for $F(z)$ by Lemma 3 of my paper: $F(z) = \frac{1}{\Gamma(-z)}(\sum_{n=0}^\infty F(n)\frac{(-1)^n}{n!(n-z)} + \int_1^\infty e^{-\lambda x} \beta(-x)x^{z-1}\,dx)$ Now $F(z)$ will be susceptible to a lot of the techniques I have in my belt involving fractional calculus. This idea just popped into my head, but I'm thinking that working with a function like this will pull down both the imaginary and the real behaviour. We also note that $e^{F(z)}\approx F(z+1)$, which again will be more obvious if you look at the paper, but it basically follows because $F(n) \approx (^n e)$.
## Motivation
Most supervised machine learning algorithms work in the batch setting, whereby they are fitted on a training set offline, and are used to predict the outcomes of new samples. The only way for batch machine learning algorithms to learn from new samples is to train them from scratch with both the old samples and the new ones. Meanwhile, some learning algorithms are online, and can predict as well as update themselves when new samples are available. This encompasses any model trained with stochastic gradient descent – which includes deep neural networks, factorisation machines, and SVMs – as well as decision trees, metric learning, and naïve Bayes.
Online models are usually weaker than batch models when trained on the same amount of data. However, this discrepancy tends to get smaller as the size of the training data increases. Researchers try to build online models that are guaranteed to reach the same performance as a batch model when the size of the data grows – they call this convergence. But comparing online models to batch models isn’t really fair, because they’re not meant to solve the same problems.
Batch models are meant to be used when you can afford to retrain your model from scratch every so often. Online models, on the contrary, are meant to be used when you want your model to learn from a stream of data, and therefore never have to restart from scratch. Learning from a stream of data is something a batch model can’t do, and is very much different to the usual train/test split paradigm that machine learning practitioners are used to. In fact, there are other ways to evaluate the performance of an online model that make more sense than, say, cross-validation.
## Cross-validation
To begin with, I'm going to compare scikit-learn's SGDRegressor and Ridge. In a nutshell, SGDRegressor has the same model parameters as a Ridge, but is trained via stochastic gradient descent, and can thus learn from a stream of data. In practice this happens via the partial_fit method. Both can be seen as linear regression with some L2 regularisation thrown into the mix. Note that scikit-learn provides a list of its estimators that support "incremental learning", which is a synonym of online learning.
As a running example in this blog post, I'm going to be using the New York City taxi trip duration dataset from Kaggle. This dataset contains 6 months of taxi trips and is a perfect use case for online learning. We'll start by loading the data and tidying it up a bit:
import pandas as pd

taxis = pd.read_csv(
    'nyc_taxis/train.csv',
    parse_dates=['pickup_datetime', 'dropoff_datetime'],
    index_col='id',
    dtype={'vendor_id': 'category', 'store_and_fwd_flag': 'category'}
)
taxis = taxis.rename(columns={
'pickup_longitude': 'pickup_lon',
'dropoff_longitude': 'dropoff_lon',
'pickup_latitude': 'pickup_lat',
'dropoff_latitude': 'dropoff_lat'
})
| id | vendor_id | pickup_datetime | dropoff_datetime | passenger_count | pickup_lon | pickup_lat | dropoff_lon | dropoff_lat | store_and_fwd_flag | trip_duration | l1_dist | l2_dist | day | weekday | hour |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| id0190469 | 2 | 2016-01-01 00:00:17 | 2016-01-01 00:14:26 | 5 | -73.9817 | 40.7192 | -73.9388 | 40.8292 | N | 849 | 0.152939 | 0.118097 | 1 | 4 | 0 |
| id1665586 | 1 | 2016-01-01 00:00:53 | 2016-01-01 00:22:27 | 1 | -73.9851 | 40.7472 | -73.958 | 40.7175 | N | 1294 | 0.0567207 | 0.0401507 | 1 | 4 | 0 |
| id1210365 | 2 | 2016-01-01 00:01:01 | 2016-01-01 00:07:49 | 5 | -73.9653 | 40.801 | -73.9475 | 40.8152 | N | 408 | 0.031929 | 0.0227259 | 1 | 4 | 0 |
| id3888279 | 1 | 2016-01-01 00:01:14 | 2016-01-01 00:05:54 | 1 | -73.9823 | 40.7513 | -73.9913 | 40.7503 | N | 280 | 0.0100403 | 0.00910266 | 1 | 4 | 0 |
| id0924227 | 1 | 2016-01-01 00:01:20 | 2016-01-01 00:13:36 | 1 | -73.9701 | 40.7598 | -73.9894 | 40.743 | N | 736 | 0.0360603 | 0.0255567 | 1 | 4 | 0 |
The dataset contains a few anomalies, such as trips that last a very long time. For simplicity we'll only consider the trips that last under an hour, which is the case for over 99% of them.
taxis.query('trip_duration < 3600', inplace=True)
Now let’s add a few features.
# Distances
taxis['l1_dist'] = taxis.eval('abs(pickup_lon - dropoff_lon) + abs(pickup_lat - dropoff_lat)')
taxis['l2_dist'] = taxis.eval('sqrt((pickup_lon - dropoff_lon) ** 2 + (pickup_lat - dropoff_lat) ** 2)')
# The usual suspects
taxis['day'] = taxis['pickup_datetime'].dt.day
taxis['weekday'] = taxis['pickup_datetime'].dt.weekday
taxis['hour'] = taxis['pickup_datetime'].dt.hour
Cross-validation is a well-known machine learning technique, so allow me not to digress on it. The specificity of our case is that our observations have timestamps. Therefore, performing cross-validation with folds chosen at random is a mistake. Indeed, if our goal is to get a faithful idea of the performance of our model for future data, then we need to take into account the temporal aspect of the data. For more material on this, I recommend reading this research paper and this CrossValidated thread. To keep things simple, we can split our dataset in two. The test set will be the last month in our dataset, which is June, whilst the training set will contain all the months before that. This isn't cross-validation per se, but what matters here is the general idea.
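As an aside that isn't in the original post: if you do want several time-ordered folds rather than the single split used below, scikit-learn's TimeSeriesSplit produces expanding, chronologically ordered folds (this assumes the rows are already sorted by pickup time):

```python
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(X):
    # Each fold trains on the past and validates on the period that follows
    print(len(train_idx), len(test_idx))
```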
from sklearn import preprocessing
is_test = taxis['pickup_datetime'].dt.month == 6 # i.e. the month of June
not_features = [
'vendor_id', 'pickup_datetime', 'dropoff_datetime',
'store_and_fwd_flag', 'trip_duration'
]
X = taxis.drop(columns=not_features)
X[:] = preprocessing.scale(X)
y = taxis['trip_duration']
X_train = X[~is_test]
y_train = y[~is_test]
X_test = X[is_test]
y_test = y[is_test]
Now obtaining a performance score for a batch model is simple: we train it on the training set and we make predictions on the test set. In our case we’ll calculate the mean absolute error because this implies that the error will be measured in seconds.
from sklearn import linear_model
from sklearn import metrics
lin_reg = linear_model.Ridge()
lin_reg.fit(X_train, y_train)
y_pred = lin_reg.predict(X_test)
score = metrics.mean_absolute_error(y_test, y_pred)
As for the SGDRegressor, we can also train it on the whole training set and evaluate it on the test set. However, we can also train it incrementally by batching the training set. For instance, we can split the training set in 5 chunks and call partial_fit on each chunk. We can therefore see how much the amount of training data affects the performance on the test set. Note that we choose 5 because this is equivalent to the number of months in the training set.
import numpy as np
sgd = linear_model.SGDRegressor(
learning_rate='constant',
eta0=0.01,
random_state=42
)
n_rows = 0
scores = {}
for X_chunk in np.array_split(X_train.iloc[::-1], 5):
y_chunk = y_train.loc[X_chunk.index]
sgd = sgd.partial_fit(X_chunk, y_chunk)
y_pred = sgd.predict(X_test)
n_rows += len(X_chunk)
scores[n_rows] = metrics.mean_absolute_error(y_test, y_pred)
Let’s see how this looks on a chart.
Click to see the code
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(14, 8))
ax.axhline(score, label='Batch linear regression')
ax.scatter(
list(scores.keys()),
list(scores.values()),
label='Incremental linear regression'
)
ax.legend(loc='lower center')
ax.set_ylim(0, score * 1.1)
ax.ticklabel_format(style='sci', axis='x', scilimits=(0, 0))
ax.grid()
ax.set_title('Batch vs. incremental linear regression', pad=16)
fig.savefig('batch_vs_incremental.svg', bbox_inches='tight')
As we can see, both models seem to be performing just as well. The average error is just north of 5 minutes. It seems that the amount of data doesn’t have too much of an impact on performance. But this isn’t telling the whole story.
## Progressive validation
In the case of online learning, the shortcoming of cross-validation is that it doesn't faithfully reproduce the steps that the model will undergo. Cross-validation assumes that the model is trained once and remains static from then on. However, an online model keeps learning, and can make predictions at any point in its lifetime. Remember, our goal is to obtain a measure of how well the model would perform in a production environment. Cross-validation will produce a proxy of this measure, but we can do even better.
In the case of online machine learning, we have another validation tool at our disposal called progressive validation. In an online setting, observations arrive from a stream in sequential order. Each observation can be denoted as $(x_i, y_i)$, where $x_i$ is a set of features, $y_i$ is a label, and $i$ is used to denote time (i.e., it can be an integer or a timestamp). Before updating the model with the pair $(x_i, y_i)$, we can ask the model to predict the output of $x_i$, and thus obtain $\hat{y}_i$. We can then update a live metric by providing it with $y_i$ and $\hat{y}_i$. Indeed, common metrics such as accuracy, MSE, and ROC AUC are all sums and can thus be updated online. By doing so, the model is trained with all the data in a single pass, and all the data is as well used as a validation set. Think about that, because it’s quite a powerful idea. Moreover, the data is processed in the order in which it arrives, which means that it is virtually impossible to introduce data leakage – including target leakage.
Let’s apply progressive validation to our SGDRegressor on the whole taxi trips dataset. I encourage you to go through the code because it’s quite self-explanatory. I’ve added comments to separate the sequences of steps that are performed. In short these are: 1) get the next sample, 2) make a prediction, 3) update a running average of the error, 4) update the model. An exponentially weighted average of the MAE is stored in addition to the overall running average. This allows to get an idea of the recent performance of the model at every point in time. To keep things clear in the resulting chart, I’ve limited the number of samples to 38,000, which roughly corresponds to a week of data.
from sklearn import exceptions
sgd = linear_model.SGDRegressor(
learning_rate='constant',
eta0=0.01,
random_state=42
)
scores = []
exp_scores = []
running_mae = 0
exp_mae = 0
X_y = zip(X.to_numpy(), y.to_numpy())
dates = taxis['pickup_datetime'].to_numpy()
for i, date in enumerate(dates, start=1):
xi, yi = next(X_y)
# Make a prediction before the model learns
try:
y_pred = sgd.predict([xi])[0]
except exceptions.NotFittedError: # happens if partial_fit hasn't been called yet
y_pred = 0.
# Update the running mean absolute error
mae = abs(y_pred - yi)
running_mae += (mae - running_mae) / i
# Update the exponential moving average of the MAE
exp_mae = .1 * mae + .9 * exp_mae
# Store the metric at the current time
if i >= 10:
scores.append((date, running_mae))
exp_scores.append((date, exp_mae))
# Finally, make the model learn
sgd.partial_fit([xi], [yi])
if i == 38000:
break
Now let's see how this looks:
Click to see the code
import matplotlib.dates as mdates
fig, ax = plt.subplots(figsize=(14, 8))
hours = mdates.HourLocator(interval=8)
h_fmt = mdates.DateFormatter('%A %H:%M')
ax.plot(
[d for d, _ in scores],
[s for _, s in scores],
linewidth=3,
label='Running average',
alpha=.7
)
ax.plot(
[d for d, _ in exp_scores],
[s for _, s in exp_scores],
linewidth=.3,
label='Exponential moving average',
alpha=.7
)
ax.legend()
ax.set_ylim(0, 600)
ax.xaxis.set_major_locator(hours)
ax.xaxis.set_major_formatter(h_fmt)
fig.autofmt_xdate()
ax.grid()
fig.savefig('progressive_validation.svg', bbox_inches='tight')
There are two interesting things to notice. First of all, the average performance of the online model is around 200 seconds, which is better than when using cross-validation. This should make sense, because in this online paradigm the model gets to learn every time a sample arrives, whereas previously it was static. You could potentially obtain the same performance with a batch model, but you would need to retrain it from scratch every time a sample arrives. At the very least it would have to be retrained as frequently as possible. The other thing to notice is that the performance of the model seems to oscillate periodically. This could mean that there is an underlying seasonality that the model is not capturing. It could also mean that the variance of the durations changes along time. In fact, this can be verified by looking at the average and the variance of the trip durations per hour of the day.
Click to see the code
agg = (
taxis.assign(hour=taxis.pickup_datetime.dt.hour)
.groupby('hour')['trip_duration']
.agg(['mean', 'std'])
)
fig, ax = plt.subplots(figsize=(14, 8))
color = 'tab:red'
agg['mean'].plot(ax=ax, color=color)
ax.set_ylim(0, 1000)
ax2 = ax.twinx()
color = 'tab:blue'
agg['std'].plot(ax=ax2, color=color)
ax2.set_ylabel('Trip duration standard deviation', labelpad=10, color=color)
ax2.set_ylim(0, 750)
ax.grid()
ax.set_xticks(range(24))
ax.set_title('Trip duration distribution per hour', pad=16)
fig.savefig('hourly_averages.svg', bbox_inches='tight')
We can see that there is much more variance for trips that depart at the beginning of the afternoon than there is for those that occur at night. There are many potential explanations, but that isn’t the topic of this blog post. The above chart just helps to explain where the cyclicity in the model’s performance is coming from.
In an online setting, progressive validation is a natural method and is often used in practice. For instance, it is mentioned in subsection 5.1 of Ad Click Prediction: a View from the Trenches. In this paper, written by Google researchers, they use progressive validation to evaluate an ad click–through rate (CTR) model. The authors of the paper remark that models which are based on the gradient of a loss function require computing a prediction anyway, in which case progressive validation can essentially be performed for free. Progressive validation is appealing because it attempts to simulate a live environment wherein the model has to predict the outcome of $x_i$ before the ground truth $y_i$ is made available. For instance, in the case of a CTR task, the label $y_i \in \{0, 1\}$ is available once the user has clicked on the ad (i.e., $y_i = 1$), has navigated to another page (i.e., $y_i = 0$), or a given amount of time has passed (i.e., $y_i = 0$). Indeed, in a live environment, there is a delay between the query (i.e., predicting the outcome of $x_i$) and the answer (i.e., when $y_i$ is revealed to the model). Note that machine learning is used in the first place because we want to guess $y_i$ before it happens. The larger the delay, the lesser the chance that $x_i$ and $y_i$ will arrive in perfect sequence. Before $y_i$ is made available, any of $x_{i+1}, x_{i+2}, \dots$ could potentially arrive and require predictions to be made. However, when using progressive validation, we implicitly assume that there is no delay. In other words the model has access to $y_i$ immediately after having produced $\hat{y}_i$. Therefore, progressive validation is not necessarily a faithful simulation of a live environment. In fact progressive validation is overly optimistic when the data contains seasonal patterns.
## Delayed progressive validation
In a CTR task, the delay between the query $x_i$ and the answer $y_i$ is usually quite small and can be measured in seconds. However, for other tasks, the gap can be quite large because of the nature of the problem. If a model predicts the duration of a taxi trip, then obviously the duration of the trip is only known once the taxi arrives at the desired destination. However, when using progressive validation, the model is given access to the true duration right after it has made a prediction. If the model is then asked to predict the duration of another trip which departs at a similar time as the previous trip, then it will be cheating because it knows how long the previous trip lasted. In a live environment this situation can't occur because the future is obviously unknown. However, in a local environment this kind of leakage can occur if one is not careful. To accurately simulate a live environment and thus get a reliable estimate of the performance of a model, we thus need to take into account the delay in arrival times between $x_i$ and $y_i$. The problem with progressive validation is that it doesn't take said delay into account.
The gold standard is to have a log file with the arrival times of each set of features $x_i$ and each outcome $y_i$. We can then ask the model to make a prediction when $x_i$ arrives, and update itself with $(x_i, y_i)$ once $y_i$ is available. In a fraud detection system for credit card transactions, $x_i$ would contain details about the transaction, whilst $y_i$ would be made available once a human expert has confirmed the transaction as fraudulent or not. However, a log file might not always be available. Indeed, most of the time datasets do not indicate the times at which both the features and the targets arrived.
A method for alleviating this issue is called “delayed progressive validation”. I’ve added quotes because I actually coined it myself. The short story is that I wanted to publish a paper on the topic. A short while after, during an exchange with Albert Bifet, he told me that his team had very recently published a paper on the topic. I cursed a tiny bit and decided to write a blog post instead!
Delayed progressive validation is quite intuitive. Instead of updating the model immediately after it has made a prediction, the idea is to update it once the ground truth would be available. This way the model learns and predicts samples without leakage. To do so, we can pick a delay $d > 0$ and append a quadruplet $(i + d, x_i, y_i, \hat{y}_i)$ into a sorted list which we’ll call $Q$. Once we reach $i + d$, the model is given access to $y_i$ and can therefore be updated, whilst the metric can be updated with $y_i$ and $\hat{y}_i$. We can check if we’ve reached $i + d$ every time a new observation comes in.
For various reasons you might not be able to assign an exact value to $d$. The nice thing is that $d$ can be anything you like, and doesn’t necessarily have to be the same for every observation. This provides the flexibility of either using a constant, a random variable, or a value that depends on one or more attributes of $x_i$. For instance, in a credit card fraud detection task, it might be that the delay varies according to the credit card issuer. In the case of taxi trips, $d$ is nothing more than the duration of the trip.
Initially, $Q$ is empty, and grows every time the model makes a prediction. Once the next observation arrives, we loop over $Q$ in insertion order. For each quadruplet in $Q$ which is old enough, we update the model and the metric, before removing the quadruplet from $Q$. Because $Q$ is ordered, we can break the loop over $Q$ whenever a quadruplet is not old enough. Once we have depleted the stream of data, $Q$ will still contain some quadruplets, and so the final step of the procedure is to update the metric with the remaining quadruplets.
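To make the bookkeeping concrete, here is a schematic sketch of the procedure just described, independent of the taxi example further down; the model and metric objects and the delay function are placeholders rather than any particular library's API (the index i is kept inside the tuples as a tie-breaker, for the same reason as in the implementation below):

```python
import bisect

def delayed_progressive_val(stream, model, metric, delay):
    # stream yields (i, x, y) in time order; delay(x) says how long y stays hidden.
    # model.predict / model.learn and metric.update are placeholder methods.
    queue = []  # kept sorted by the moment each ground truth becomes available
    for i, x, y in stream:
        # Release every pending entry whose ground truth is now available
        while queue and queue[0][0] <= i:
            _, _, x_old, y_old, y_pred_old = queue.pop(0)
            metric.update(y_old, y_pred_old)
            model.learn(x_old, y_old)
        # Predict before the ground truth is revealed, then park the quadruplet
        y_pred = model.predict(x)
        bisect.insort(queue, (i + delay(x), i, x, y, y_pred))
    # Once the stream is depleted, the leftovers only update the metric
    for _, _, _, y_old, y_pred_old in queue:
        metric.update(y_old, y_pred_old)
    return metric
```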
On the one hand, delayed progressive validation will perform as many predictions and model updates as progressive validation. Indeed, every observation is used once for making prediction, and once for updating the model. The only added cost comes from inserting $(x_i, y_i, \hat{y}_i, i + d)$ into $Q$ so that $Q$ remains sorted. This can be done in $\mathcal{O}(log(|Q|))$ time by using the bisection method, with $|Q|$ being the length of $Q$. In the special case where the delay $d$ is constant, the bisection method can be avoided because each quadruplet $(x_i, y_i, \hat{y}_i, i + d)$ can simply be inserted at the beginning of $Q$. Other operations, namely comparing timestamps and picking a delay, are trivial. On the other hand, the space complexity is higher than progressive validation because $Q$ has to be maintained in memory.
Naturally, the size of the queue is proportional to the delay. For most cases this shouldn’t be an issue because the observations are being processed one at a time, which means that quadruplets are added and dropped from the queue at a very similar rate. You can also place an upper bound on the expected size of $Q$ by looking at the average value of $d$ and the arrival rate, but we’ll skip that for the time being. In practice, if the amount of available memory runs out, then $Q$ can be written out to the disk, but this is very much an edge case. Finally, note that progressive validation can be seen as a special case of delayed progressive validation when the delay is set to 0. Indeed, in this case $Q$ will contain at most one element, whilst the predictions and model updates will be perfectly interleaved.
Let’s go about implementing this. We’ll use Python’s bisect module to insert quadruplets into the queue. Each quadruplet is a tuple that stands for a trip. We place the arrival date at the start of the tuple in order to be able to compare trips according to their arrival time. Indeed, Python compares tuples position by position, as explained in this StackOverflow post.
import bisect
import datetime as dt
def simulate_qa(X_y, departure_dates):
trips_in_progress = []
for i, departure_date in enumerate(departure_dates, start=1):
# Go through the trips in progress and check if they're finished
while trips_in_progress:
trip = trips_in_progress[0]
arrival_date = trip[0]
if arrival_date < departure_date:
yield trip
del trips_in_progress[0]
continue
break
xi, yi = next(X_y)
# Show the features, hide the target
yield departure_date, i, xi, None
# Store the trip for later use
arrival_date = departure_date + dt.timedelta(seconds=int(yi))
trip = (arrival_date, i, xi, yi)
bisect.insort(trips_in_progress, trip)
# Terminate the rest of the trips in progress
yield from trips_in_progress
To differentiate between departures and arrivals, we're yielding a trip with a duration set to None. In other words, a None value implicitly signals a taxi departure which requires a prediction. This also avoids any leakage concerns that may occur. Let's do a quick sanity check to verify that our implementation behaves correctly. The following example also helps to understand and visualise what the above implementation is doing.
time_table = [
(dt.datetime(2020, 1, 1, 20, 0, 0), 900),
(dt.datetime(2020, 1, 1, 20, 10, 0), 1800),
(dt.datetime(2020, 1, 1, 20, 20, 0), 300),
(dt.datetime(2020, 1, 1, 20, 45, 0), 400),
(dt.datetime(2020, 1, 1, 20, 50, 0), 240),
(dt.datetime(2020, 1, 1, 20, 55, 0), 450)
]
X_y = ((None, duration) for _, duration in time_table)
departure_dates = (date for date, _ in time_table)
for date, i, xi, yi in simulate_qa(X_y, departure_dates):
if yi is None:
print(f'{date} - trip #{i} departs')
else:
print(f'{date} - trip #{i} arrives after {yi} seconds')
2020-01-01 20:00:00 - trip #1 departs
2020-01-01 20:10:00 - trip #2 departs
2020-01-01 20:15:00 - trip #1 arrives after 900 seconds
2020-01-01 20:20:00 - trip #3 departs
2020-01-01 20:25:00 - trip #3 arrives after 300 seconds
2020-01-01 20:40:00 - trip #2 arrives after 1800 seconds
2020-01-01 20:45:00 - trip #4 departs
2020-01-01 20:50:00 - trip #5 departs
2020-01-01 20:51:40 - trip #4 arrives after 400 seconds
2020-01-01 20:54:00 - trip #5 arrives after 240 seconds
2020-01-01 20:55:00 - trip #6 departs
2020-01-01 21:02:30 - trip #6 arrives after 450 seconds
Now let's re-evaluate our model with delayed progressive validation. There is very little we have to modify in the existing evaluation code. The biggest change is that we need to store the predictions while we wait for their associated ground truths to be available. We can release each prediction from memory once the relevant ground truth arrives – i.e. a taxi arrives.
from sklearn import linear_model
sgd = linear_model.SGDRegressor(
learning_rate='constant',
eta0=0.01,
random_state=42
)
scores = []
exp_scores = []
running_mae = 0
exp_mae = 0
X_y = zip(X.to_numpy(), y.to_numpy())
departure_dates = taxis['pickup_datetime']
trips = simulate_qa(X_y, departure_dates)
predictions = {}
n_preds = 0
for date, trip_id, xi, yi in trips:
if yi is None:
# Make a prediction
try:
y_pred = sgd.predict([xi])[0]
except exceptions.NotFittedError: # happens if partial_fit hasn't been called yet
y_pred = 0.
predictions[trip_id] = y_pred
continue
# Update the running mean absolute error
y_pred = predictions.pop(trip_id)
mae = abs(y_pred - yi)
n_preds += 1
running_mae += (mae - running_mae) / n_preds
# Update the exponential moving average of the MAE
exp_mae = .1 * mae + .9 * exp_mae
# Store the metric at the current time
if trip_id >= 10:
scores.append((date, running_mae))
exp_scores.append((date, exp_mae))
# Finally, make the model learn
sgd.partial_fit([xi], [yi])
if n_preds == 38000:
break
I agree that the code can seem a bit verbose. However, it's very easy to generalise and the logic can be encapsulated in a higher-level function, including the simulate_qa function. In fact, the creme library has a progressive_val_score function in its model_selection module that does just that. Now let's see what the performance looks like on a chart.
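Before looking at the chart, here is roughly what that creme shortcut looks like. I am writing this sketch from memory: the argument names (X_y, model, metric, moment, delay) and the exact behaviour are assumptions that may differ between creme versions, so treat it as indicative rather than as the definitive API:

```python
import datetime as dt
from creme import linear_model, metrics, model_selection, preprocessing

model = preprocessing.StandardScaler() | linear_model.LinearRegression()
metric = metrics.MAE()

# X_y would be an iterator of (features, target) pairs; moment picks out the
# departure time and delay returns how long the target stays hidden,
# mirroring the manual loop above
model_selection.progressive_val_score(
    X_y=X_y,
    model=model,
    metric=metric,
    moment='pickup_datetime',
    delay=lambda x, y: dt.timedelta(seconds=int(y))
)
```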
Click to see the code
fig, ax = plt.subplots(figsize=(14, 8))
hours = mdates.HourLocator(interval=8)
h_fmt = mdates.DateFormatter('%A %H:%M')
ax.plot(
[d.to_datetime64() for d, _ in scores],
[s for _, s in scores],
linewidth=3,
label='Running average',
alpha=.7
)
ax.plot(
[d.to_datetime64() for d, _ in exp_scores],
[s for _, s in exp_scores],
linewidth=.3,
label='Exponential moving average',
alpha=.7
)
ax.legend()
ax.set_ylim(0, 600)
ax.xaxis.set_major_locator(hours)
ax.xaxis.set_major_formatter(h_fmt)
fig.autofmt_xdate()
ax.grid() |
# Conditions on matrices imply that 3 divides $n$
A quite popular exercise in linear algebra is the following (or very related exercises, see for example https://math.stackexchange.com/questions/299651/square-matrices-satisfying-certain-relations-must-have-dimension-divisible-by-3 and https://math.stackexchange.com/questions/3109173/ab-ba-invertible-and-a2b2-ab-then-3-divides-n):
Let $$K$$ be a field of characteristic different from 3 and $$X$$ and $$Y$$ two $$n\times n$$-matrices with $$X^2+Y^2+XY=0$$ and $$XY-YX$$ invertible. Then 3 divides $$n$$.
A (representation-theoretic) proof can be given as in the answer of Mariano Suárez-Álvarez in https://math.stackexchange.com/questions/299651/square-matrices-satisfying-certain-relations-must-have-dimension-divisible-by-3 .
Question: Is this also true for fields of characteristic 3?
edit: So it turned out that the result holds for any field. A bonus question might be to find a proof that works independent of the characteristic of the field.
• A simple observation in characteristic 3 is that the hypothesis is equivalent to (X-Y)^2 = [X,Y] invertible. So in particular X = M + Y where M is invertible. Substituting M yields M^2 = [M,Y] for some Y. This is a bit stronger than tr M^2 = 0. – Andrea Marino Jun 3 '19 at 18:19
• Following Andrea's simplification. By multiplying both on the left and right by $M^{-1}$ we get $I=[Y,M^{-1}]$. Taking traces of both sides gives $n=0$, which means $3|n$ since we are in characteristic 3. – Gjergji Zaimi Jun 3 '19 at 19:27
• This is a very beautiful solution. It would be nice to turn it into an answer. – Mare Jun 3 '19 at 19:56
• Great job Gjergji! We did it. Who's going to write it? – Andrea Marino Jun 3 '19 at 21:06
• @LSpice - no it is not required. The other answer next to Mariano's essentially works just fine whenever there exists a primitive cube root of unity. – Vladimir Dotsenko Jun 4 '19 at 16:26
Assembling the comments, I write the entire solution.
STEP 1: the first hypothesis in characteristic 3 is equivalent to $$(X-Y)^2 = [X,Y]$$.
Indeed, note that
$$(X-Y)^2 - [X,Y] = X^2-XY-YX+Y^2- XY+YX = X^2-2XY+Y^2 = X^2+Y^2+XY$$
Why did I make this calculation? Note that $$X^2+Y^2+XY$$ resembles a factor of $$X^3-Y^3$$, which would be equal (in the commutative case) to $$(X-Y)^3$$ in char 3. To get there we would need one more factor of $$X-Y$$, so we can guess that $$(X-Y)^2$$ and $$X^2+Y^2+XY$$ will be equal up to some commutators!
STEP 2: getting the final result.
We have that $$(X-Y)^2 = [X,Y]$$ is invertible by the second hypothesis. Thus in particular $$X-Y$$ is invertible. This allows us to substitute $$X=M+Y$$ with M invertible, obtaining
$$M^2 = [M+Y,Y] = [M,Y]$$
The only relevant information we have about arbitrary commutators is that the trace is zero. We hence would like to have some matrix with easy trace (like the identity) to compare with a commutator. To do this, let's multiply on both the left and the right by $$M^{-1}$$:
$$I = M^{-1}[M,Y]M^{-1} = M^{-1}(MY-YM)M^{-1} = YM^{-1}-M^{-1}Y = [Y,M^{-1}]$$ Taking traces we get $$n=0$$. Being in characteristic 3, this gives $$3 \mid n$$.
Let me start from $$M^2=[M,Y]$$ with $$M\in {\bf GL}_n(k)$$, as suggested by Andrea. Wlog, I assume that the characteristic polynomial of $$M$$ splits over $$k$$, and I decompose $$k^n$$ as the direct sum of characteristic subspaces $$E_\mu=\ker(M-\mu I_n)^n$$. It is enough to prove that the dimension of each $$E_\mu$$ is a multiple of $$3$$. To this end, observe that $$Y$$ acts over $$E_\mu$$. In detail, let $$x$$ be an eigenvector, $$Mx=\mu x$$. Then $$(M-\mu)Yx=\mu^2x$$. Because $$M$$ is invertible, $$\mu\ne0$$ and therefore $$Yx\in\ker(M-\mu)^2\setminus\ker(M-\mu).$$ Likewise $$(M-\mu)Y^2x=2\mu^2Yx+2\mu^3x,$$ hence $$Y^2x\in\ker(M-\mu)^3\setminus\ker(M-\mu)^2$$. Eventually, using again $${\rm char}(k)=3$$, we have $$(M-\mu)Y^3x=0.$$ We deduce that a basis of $$E_\mu$$ is obtained by taking a basis $$\cal B$$ of $$\ker(M-\mu)$$, and adjoining the vectors of $$Y\cal B$$ and $$Y^2\cal B$$; all of them are linearly independent, as seen above. Thus $$\dim E_\mu=3\dim\ker(M-\mu)$$.
## Sunday, November 27, 2011
### Finalizing arbitrary spin coupling
As expected, my work with Sympy slowed drastically once school started, but nevertheless, I have found enough time to polish off the coupling of arbitrary number of spin spaces that I started over the summer. I'll probably wait until after school is done (and the initial Google Code-In traffic dies down) before opening a pull request, but it has neared the state of conclusion, but I will outline the work done on the branch here.
A notable change from the summer is the coupling and uncoupling code is now much cleaner. The old methods used messy while True: loops which would increment some parameters and check if some end condition was reached, which I found very unsatisfactory and open to some weird use case throwing it into complete disarray. The new methods utilize the notion that any coupling or uncoupling will occur such that there is a well defined change in either the j (in the case of coupling) or m (in the case of uncoupling) values from their maximal values, and this change can be applied over the (un)couplings in the same way you can distribute n balls in m boxes, then it is just matching an integer to a given state and check that the given state is physically feasible.
In addition, I have added all necessary documentation for the new functionality and fixed a few other minor issues with other parts of the new code. I may yet change some of the handling of the j_coupling parameter, but I will reevaluate that when I have more time to look at the code after I finish the semester.
The passing of quantum numbers to define the couplings and un-couplings is still quite verbose, but I see no better way of passing the parameters, hopefully in review someone will see a better way of defining states and couplings.
## Friday, August 19, 2011
### Finishing GSoC
So this is the last week of the GSoC program. I'll be writing up a full report on what I've done over the summer here and it will be updated over this next week. This blog post will be recapping this last week of progress and looking forward past the GSoC.
The main thing to report with this last week was the finishing the work on the spin coupling work that was laid out last week and the writing of the code for Coupled spin states, the last pull request I'll get in during the GSoC project is currently open and should only need a last bit of code review to get pulled.
The main thing now is moving beyond the work that will be done during the GSoC project. While I'll be starting classes this next week and I have my qual the next week, so work will definitely slow down. However, this last week, I worked on the multi_coupling branch, which takes the coupling work that is in the current pull and expands it to allow for an arbitrary number of spin bases. The first thing to implement with this was a means of representing the coupling between the spin bases, since the order in which spaces are coupled matters. To do this, I added a jcoupling option to the functions that deal with coupled states. It currently seems pretty messy, but I'm not sure of a better way to do it, as coupling multiple spaces will just pick up a bunch of additional quantum numbers that need to be represented somehow. Basically, this parameter is passed as a list of lists, where each element of the outermost list represents a coupling between two spin spaces. These inner lists have 3 elements, 2 giving the number of the space that is being coupled and the third being the j value of these spaces coupled together. For example, if we wanted to represent a state |j,m,j1,j2,(j12),j3>, the jcoupling would be ( (1,2,j12), ). If this option is not set, then the methods default to coupling the spaces in numerical order, i.e. 1 and 2, then 1,2 and 3, etc. Using this, I have been able to rewrite the uncouple code. The results do not yet have tests, and I'll definitely need to do some calculations by hand to make sure this is working properly, but looking at it, I am pretty confident in the results, tho the code could use some cleaning up.
Moving forward from this would be to get the couple method working with arbitrary spin spaces and run through all of the functions that deal with spin coupling and make sure nothing is still hard coded to use two spin bases. Other than that, the project that I'd set out to work on has been basically completed. I'll continue to work with and develop sympy when I have some spare and hopefully continue to add features and functionality to the quantum module.
## Friday, August 12, 2011
### Getting coupled_spin merged
The biggest development this week was working out what is needed to get the coupled_spin which implements spin coupling merged back into master. There were some things to clean up with non-spin modules and a few minor things to address, but in cleaning this up, there will be some big changes to the way spin coupling works. First, with respect to things that have been implemented, rewrite and represent will no longer handle the coupling and uncoupling of states. To do coupling and uncoupling, instead, a couple and uncouple method will be created to handle the coupling and uncoupling of states. In addition, coupled states will now be represented by new classes, J?KetCoupled for the Cartesian directions. These will be returned by rewrite when a TensorProduct is coupled and will return the proper vector for the coupled space when it is represented and can be uncoupled when an uncoupled operator acts on it.
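The classes and functions described here ultimately landed in sympy.physics.quantum.spin; a rough sketch of how that interface can be used is below (written against a recent SymPy, so treat the exact call signatures as an assumption since names may have drifted slightly from the work-in-progress described in this post):

```python
from sympy import S
from sympy.physics.quantum.spin import JzKet, JzKetCoupled, couple, uncouple
from sympy.physics.quantum.tensorproduct import TensorProduct

# Couple an uncoupled tensor product of two spin-1/2 states into |j, m> kets
up = JzKet(S(1)/2, S(1)/2)
down = JzKet(S(1)/2, -S(1)/2)
print(couple(TensorProduct(up, down)))   # sum of coupled kets weighted by CG coefficients

# Uncouple a coupled state back into a sum of tensor products
print(uncouple(JzKetCoupled(1, 0, (S(1)/2, S(1)/2))))
```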
Most of these new changes have been implemented to varying degrees. There is some functionality lacking, but much of what remains for this is to implement tests for the new functions and make sure everything is working properly.
The coupling of arbitrary number of spin spaces had made slow progress due to some ambiguity when coupled states were created using normal states, but with the new Coupled classes, specifying the coupling should be possible, thus making the computations easier.
## Friday, August 5, 2011
### Moving beyond first coupling iteration
In the last week, one of the main things I did was to submit a pull request for the coupled spin machinery that I have been working on. This pull request can be seen here. This implements the coupling and uncoupling operations for states and operators and how these states and operators interact for coupling of two spin states. This pull still has some kinks to work out and some details to iron out, but should be finished up soon.
Moving beyond this pull, the rest of this week has been in working on modifying the coupling methods developed in this pull and making them work for an arbitrary number of spin spaces. The current idea will be to pass a tuple of j values which are to be coupled instead of passing j1 and j2 parameters. While this would work, it would be nice to be able to define how the terms are coupled, noting that the order of how the spaces are coupled matter in determining the coefficients and what will be diagonal in the basis of the coupled states. The current way I am working the coupling is to couple j1 and j2, then couple this to j3, etc. I have currently changed the all the methods to accept the tuple of j values, however, the coupling and uncoupling methods have not been changed to accept arbitrary numbers of spaces. Most of this week has been thinking and trying to determine a good way to implement this machinery that scales to arbitrary numbers of spaces. While it is not directly necessary for dealing with spin states, I will likely also implement Wigner-6j/9j/12j coefficients in cg.py, which will be very similar to the Wigner-3j symbols that were implemented with the Clebsch-Gordan coefficients.
While I am starting to work on this final component of my project, it will be a close call as to whether or not it can get pushed in time to make it in before the end of the project, which will be in just 2 weeks. The initial coupling stuff should get in, but this will be a much closer call. That said, I will definitely see this last part of the project into master.
## Saturday, July 30, 2011
### Finishing current coupled spin work
This last week I made some good headway towards finishing up the coupled spin state work for the coupling of two spin spaces. The decision was made that spin states should not contain any information as to their coupling, which greatly simplifies not only the code, but also the allowable cases when it comes to doing things such as applying operators, rewriting, etc. As such, I am very close to finalizing this stage in the coupled spin work. I will try to fix up the implementation for some symbolic cases that should be doable under the current implementation, but all the current code has tests implemented and docstrings in place, so a pull request will be coming up shortly.
With this stage finishing, I will be moving on to generalizing the current implementation to coupling between more than two spin spaces. I will first need to expand cg.py to include Wigner-6j/9j/etc symbols to describe the coupling between these additional spaces. The logic for spin states will need to be reworked as well, not only to implement these new terms for coupling additional spin spaces, but most of the logic will need to be reworked to allow for an arbitrary number of coupled spin spaces.
While the change to get rid of what would be considered a coupled spin state (that is, a state where the state has defined the coupled spaces) does simplify the current implementation, it does limit what can be done. For example, an uncoupled operator could not be applied to a coupled state, as the coupled states would need to be uncoupled, which is only possible if the j values of the coupled states are known. However, it was suggested by Brian that a new class be created to deal with coupled states in this sense. Time permitting, I will begin to look at the possibility of implementing such a feature into the current spin framework.
## Friday, July 22, 2011
### Improving rewrite and represent for coupled/uncoupled states
This last week, most of the coding I have done has been working on getting represent working properly for coupled and uncoupled states. After doing a quick double check on what the basis vectors of a coupled or uncoupled state would be, I was able to get this code in. Tests for the represent logic will still need to be added, but so far it seems to be working properly.
In addition, I modified the rewrite logic to implement the represent method. This way all of the coupling and uncoupling logic is taken care of by represent, just as the represent method also takes care of all rotations of coordinate bases. To simplify the rewrite logic, I also implemented a vect_to_state, which returns a linear combination of states given any state vector when provided with the appropriate parameters, to specify coupled or uncoupled and what the j1 and j2 parameters are.
In addition to this work, I also wrote up the shell of the class that would handle tensor products of operators. However, in its current state, it doesn't function as one would expect, as the _apply_operator_* methods are not being called by qapply. This, in addition to noting that there is very little logic in the TensorProductState class, has been making me think I can move most of the logic for states and operators that are uncoupled out of the spin class, implementing it instead in places like qapply and represent. The only trick would be the uncoupled -> coupled logic, which is just about the only bit of logic that the TensorProductState class has that couldn't necessarily be generalized, and the loss of the j1/j2/m1/m2 properties. I will be trying to do this in the coming week, which will in turn fix the problems I am having with getting tensor products of states to work.
## Saturday, July 16, 2011
### Developing coupled/uncoupled states and operators
Most of this last week was spent developing coupled and uncoupled states, beginning to develop how operators will act on these states and writing tests to ensure the code returns the desired result. This week I finished up writing the code for expressing states, and the logic for rewriting from one to the other and back. In addition to this, I implemented the tests which are used for these rewrites. This mostly finishes up the logic for the coupled/uncoupled states, there is still the represent logic which may need to be implemented, tho this will take some looking into to determine what is appropriate and necessary to implement.
For the operators, using the qapply logic already in place, I have begun to implement how operators act on coupled and uncoupled states. I have thus far only implemented logic for coupled operators, that is, for example Jz = Jz_1 + Jz_2 (= Jz x 1 + 1 x Jz in an uncoupled representation). In addition to defining how uncoupled product states are acted upon by spin operators, I have expanded the already implemented methods to act on arbitrary states, as they had previously been defined only in how they act on JzKet's. This was done by defining a basis, such that, with the now improved rewrite logic, any state can be rewritten into an appropriate basis and then acted upon by the operator. I have begun to implement the tests that ensure the implemented logic is valid in all cases, both numerical and symbolic, tho this is still a work in progress.
The focus for this next week will be continuing the development of the spin operators, hopefully getting to working with uncoupled spin operators, i.e. operators given in a tensor product that only act on one of the uncoupled states, and developing the tests necessary for the implementation of these states. If I can complete this, I will be closing in on the completion of the coupling of two spin spaces.
## Friday, July 8, 2011
### Cleaning up simplification and moving into coupled states
So, as I stated in my last post, the first thing I dealt with was fixing up the _cg_simp_add method by implementing pattern matching, moving the logic for determining if the simplification can be performed and for performing the simplification out of the _cg_simp_add method, and developing a system that can easily be expanded to include additional simplifications. To do this, I created another method, _check_cg_simp, which takes various expressions to determine if the sum can be simplified. Using Wild variables, the method takes an expression which is matched to the terms of the sum. The method uses a list to store the terms in the sum which can be simplified, so additional expressions are used to determine the length of the list and the index of the items that are matched. There are also additional parameters to handle the leading terms and the sign of the terms. There are still some issues with this method, as when there is more than one Clebsch-Gordan coefficient in the sum, the leading term cannot be matched on the term.
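As a rough illustration of the kind of matching involved, here is a sketch with a single Wild coefficient (not the actual _check_cg_simp code):
>>> from sympy import Wild, symbols
>>> from sympy.physics.quantum.cg import CG
>>> a, alpha = symbols('a alpha')
>>> c = Wild('c')
>>> (3*CG(a, alpha, 0, 0, a, alpha)).match(c*CG(a, alpha, 0, 0, a, alpha))
{c_: 3}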
In addition to the finishing of this component of the Clebsch-Gordan coefficient simplification, I have started into working on the coupled spin states and the methods to rewrite them in coupled and uncoupled bases. Coupled spin states are set by passing j1 and j2 parameters when creating the state, for example
>>> JzKet(1,0,j1=1,j2=1)
|1,0,j1=1,j2=1>
These states can be given in the uncoupled basis using the rewrite method and passing coupled=False, so:
>>> JzKet(1,0,j1=1,j2=1).rewrite(Jz, coupled=False)
2**(1/2)*|1,1>x|1,-1>/2-2**(1/2)*|1,-1>x|1,1>/2
This can also be done with a normal state and passing j1 and j2 parameters to the rewrite method, as:
>>> JzKet(1,0).rewrite(Jz, j1=1, j2=1)
2**(1/2)*|1,1>x|1,-1>/2-2**(1/2)*|1,-1>x|1,1>/2
How the coupled states will be handled by rewrite still needs to be addressed, but that will need some thinking and with another GSoC project doing a lot of changes to the represent function, it may take some coordination to get this and the TensorProducts of states and operators working.
Note that in the python expressions above, the states are given as uncoupled states written as tensor products. Uncoupled states will be written as TensorProduct's of states, which will be extended later to spin operators, being written in the uncoupled basis as a TensorProduct. I've just started playing with the uncoupled states and the various methods that will be used to go from uncoupled to coupled states and I've been putting them in a separate TensorProductState class, which subclasses TensorProduct, which keeps all the spin logic separate from the main TensorProduct class, tho this will have to be expanded to include operators. Developing the logic for these uncoupled spin states will be the primary focus of this next week of coding.
## Saturday, July 2, 2011
### Continuing GSoC work
This last week, I have made progress on my project in laying the groundwork for the spin states and in reimplementing logic in the cg_simp method for Clebsch-Gordan coefficients.
First, I have started the work on the implementation of coupled/uncoupled spin states. Currently, this is implemented by adding a coupled property to the spin states. This can be set to True for coupled, False for uncoupled or left as None for other states. As this evolves, I will move to having uncoupled product states be represented by a TensorProduct of two spin states. The next key will be establishing represent and rewrite logic for these spin states. Part of this will be figuring out how exactly these methods will work and what they will return. Namely, for the represent method, note that when representing an uncoupled state as a coupled state, it returns states with multiple j values, which, under the current logic, would give matrices of different dimensions. Also, we will have to determine what represent will do to uncoupled tensor product spin states. This next week, I will likely rebase this branch against the CG branch so I can start using the Clebsch-Gordan coefficients to implement these functions as the CG pull is finalized.
With the Clebsch-Gordan coefficients, this last week I was able to get the simplification of symbolic Sum objects working. I did this using the pattern matching built into sympy with Wild and .match. The final step with this should be to rework the logic of _cg_simp_add to make it easier to add in additional symmetries.
## Saturday, June 25, 2011
### Transitioning to spin states
This last week, the first thing that was taken care of was finishing the x/y/z spin basis representation. Having fixed the Wigner small d-function in the Rotation class, the tests for this were put into the pull request and the pull was merged into the sympy master, making this my first pull request since starting GSoC. There are still some changes that will come for the Rotation class, namely the creation of a symbolic WignerD class which is returned by the current Rotation.D and Rotation.d functions, but that will be dealt with in a later pull request.
With the x/y/z basis stuff finally out of the way, I moved back to getting the Clebsch-Gordan coefficient/Wigner-3j symbols to a state where they can be pulled. Having fallen behind in getting the CG coefficient simplification to a suitable state and with the work on the x/y/z spin basis pushing the timeline back even further, the current goal is to merge what I have so far and move on to the coupled spin states. What I have so far is the classes for the Wigner-3j symbols and the Clebsch-Gordan coefficients which can be manipulated symbolically and evaluated, and a very rough version of the cg_simp method. Currently, this method can handle three numerical simplifications, however the code is still messy and having more cases would be ideal. That said, in an effort to make sure the key parts of this GSoC project are covered, I'll be moving into writing the coupled spin states.
For the spin states portion of this project, I will develop a means of writing coupled and uncoupled spin states. The uncoupled product basis states will be written using the TensorProduct, which is in the current quantum module; each of the states in the tensor product will be states as they are currently implemented. To represent coupled basis spin states, the current spin states will be modified to include a coupled parameter. This value stores the J_i's of the spin spaces which are being coupled. In addition to the spin states being implemented, methods will be written to utilize the CG coefficients mentioned earlier to go between coupled and uncoupled basis representation. Look for more next week as this code is fleshed out.
## Saturday, June 18, 2011
### More CG simplification and wrapping up x/y/z spin bases
For the first part of this last week, I continued on my work to get sums of Clebsch-Gordan coefficients to simplify. Using the same general logic that I outlined in the last blog entry, besides general cleaning up of the code, most of the work at the beginning of the week was spent on trying to develop a function that could check an expression for CG coefficients matching a set of conditions.
The rationale behind trying to write such a method is that it would make it much easier to identify the times where symmetries could be utilized. With such a method, the process of checking for CG coefficients could be done in a single function and the logic for implementing CG symmetries could be handled in this one function. The current method uses lists of tuples to specify the conditions on CG coefficients. For example, if the j1 value of a CG coefficient needed to be =0, you could pass the tuple ("j1",0), or if the m1 and m3 values match, ("m1","m3"). All the conditions for each CG coefficient are combined into a single tuple. The current snag is with simplifications of sums of products of CG coefficients. For example:
$\sum_{\alpha,\beta}{C_{a\alpha b\beta}^{c\gamma}C_{a\alpha b\beta}^{c'\gamma'}}=\delta_{cc'}\delta_{\gamma\gamma'}$
While the current method would be able to check for specific values on the CG coefficients, I have yet to come up with a good way to check that the m1 and m2 values are the same when they can take any value, as in this example. As it stands, this code still seems like quite a hack and will need some work before it is good to go.
What is left with this part of the project is:
- Getting simplification to work with sums of products of terms (as in the example above)
- Applying CG symmetries to perform simplifications
- Simplification of symbolic CG sums
- Fixing up the printing of CG terms
- Final testing/documentation
This part of the project has unfortunately fallen behind the preliminary schedule by a bit, as it was due to be finished up last week. I'll outline what I'm currently working on finishing up next, but hopefully I can finish the CG stuff ASAP so I can move on to working on the spin stuff, which is the true meat of the project, and try to get back on schedule.
After meeting with my project mentor, Ondrej, on Wednesday, it was decided that the focus would shift to finishing up the work I'd started on x/y/z spin bases and representation of spin states that I'd started before GSoC had officially started.
The first order of business was identifying an error in the Wigner small-d function, which is used extensively in the changing of spin bases. With Ondrej noting that the small-d function was defined only on a small interval and then me discovering the bug in the Rotation.d method, we were able to address this. However, no sooner had this been done than Ondrej is able to work out a better equation for the small-d function, which will likely replace the current implementation.
Other than this, most of the work this week on the x/y/z basis representation was in documentation, testing and generally cleaning up the code to be pulled. The current pull request (my first work to be submitted since the start of GSoC) is still open here. While this pull integrates the current work on basis representation, after this pull there is still some work that will need to be done testing both the Wigner small-d and the D functions, for both symbolic and numerical values, and ensuring they return the correct results. Because the representation code relies so heavily on these functions, it is imperative that these functions evaluate properly. Once these are fully tested, there will also likely need to be more tests to ensure all the representation code returns the right values for as many odd cases as would be necessary to test. Hopefully I can finish this up soon and move on to other work that still needs to be done.
## Saturday, June 11, 2011
### More improvements to the simplification method
I was out of town for the beginning of this week, so I don't have as much to report, nevertheless in the last few days, I have made some good improvements to the cg_simp method, allowing it to simplify cases other than just the simple sums, tho it still only handles the same case as before, involving the sum of single Clebsch-Gordan coefficients. While it is entirely possible I'm doing something stupid in performing all the checks, as far as I can tell, it works for the given case. Because the code itself is not yet very clear and it is not always straightforward what is happening, I'll explain what I have implemented.
First note that for the simplifications that will be made, it is required to have a sum of terms. The first for loop in the method constructs two lists cg_part and other_part, the former consisting of all terms of the sum containing a CG coefficient and the latter the other terms.
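In outline, that first step looks something like this (a simplified sketch of the idea, not the actual code):

from sympy import Add
from sympy.physics.quantum.cg import CG

def split_terms(expr):
    # separate the terms of a sum into those containing a CG coefficient and the rest
    cg_part, other_part = [], []
    for term in Add.make_args(expr):
        (cg_part if term.has(CG) else other_part).append(term)
    return cg_part, other_part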
Next, we iterate over the list cg_part. The number of CG coefficients is computed, which determines which simplifications can be made (currently, the only implemented simplification uses terms with 1 CG coefficient in each sum term). Those terms with the proper number of CG coefficients are then considered for the simplification. The way this will work is: based on the properties of the CG coefficient in the term, it will search the rest of the list for other terms that can be used in the simplification, and if all the terms exist, will simplify the terms.
Turning to the implementation, when iterating through the list, the first thing to do is determine the properties of the CG coefficient terms, that is to extract from the term in the sum the CG coefficient itself and the numerical leading terms. The sign of these numerical terms is also noted here.
Next, the rest of the list is checked to see if a simplification can be made using the determined term. To keep track of this, a list, cg_index, is initialized with False as each of its elements. In checking the later terms, we perform a similar decomposition as with the first term, that is splitting up the CG coefficient from the other components of the term, determining the CG coefficient, the leading terms and the sign of the terms. If the properties of these are correct, then the corresponding element of cg_index is updated with a tuple (term, cg, coeff), where term is the term in cg_part (so this element can be removed later), cg is the CG coefficient and coeff is the leading numerical coefficient of the coefficient.
Now, if all the elements of cg_index have been changed, the simplification is performed. When this happens, first we find the minimum coefficient of the chosen CG coefficients, which determines the number of times we can apply the simplification. Then the replacing happens; for each element in cg_index (which is a tuple) the first element of the tuple is popped off cg_part, then, if the term is not eliminated by the simplification, a new term is created and added to cg_part, and finally a constant is added to other_part, completing the simplification.
Looking at the code, this method is very straightforward, but should be robust and scalable for treating cases of sums with numerical leading coefficients, and now that the i's have been dotted and the t's have been crossed on testing this method, implementing new cases should come rapidly in the next couple days. However, one place where this will still need some work is in implementing symbolic simplification, both in dealing with symbolic leading terms on the CG coefficients and symbolic CG coefficients themselves. This will take a bit of thought and likely a bit of help to complete, but this is one thing I hope to work on in the next week. In addition, as the simplification comes into place, I'll work on polishing out the last of the details to get the classes for the Wigner3j/CG coefficients working properly.
## Tuesday, May 31, 2011
### Implementing Clebsch-Gordan symmetries and sum properties
In this first week of the GSoC project, I focused on implementing methods that would simplify terms with Clebsch-Gordan coefficients. This still has a long way to go, but I will outline what I have done so far.
The first step was implementing means of dealing with sums of single coefficients. This would hopefully look something like:
>>>Sum(CG(a,alpha,0,0,a,alpha),(alpha,-a,a+1))
2*a+1
The first implementation of this used an indexing system that was able to index single coefficients, which could then be processed. This allowed the simplification function to act properly in simple numerical cases, so it could do things like:
>>>cg_simp(CG(1,-1,0,0,1,-1)+CG(1,0,0,0,1,0)+CG(1,1,0,0,1,1)+a)
a+3
The problem with this implementation is that doing something as simple as giving one of the terms a constant coefficient would break it. In addition, there would be no clear way to extend this to sums involving products of multiple Clebsch-Gordan coefficients.
To deal with this, I started working on a solution that could deal with having constant coefficients and products of coefficients. Currently implemented is a method which creates a list of tuples containing information about the Clebsch-Gordan coefficients and the leading coefficients of the Clebsch-Gordan coefficients. Currently, the only implemented logic is able to deal with the case that could be dealt with in the previous implementation; however, this should be able to expand to encompass more exotic cases.
Another thing that was touched on this last week was treating symmetries. These are quite simple to implement, as they need only return new Clebsch-Gordan coefficients in place of old ones, just with the parameters changed in correspondence with the symmetry operation. The key will be using these symmetries to help in simplifying terms. This will be based on the development of better logic in the simplification method and the implementation of some means of determining if these symmetries can be used to apply some property of the Clebsch-Gordan coefficients that can simplify the expression.
I will be out of town this next week on a vacation, and will not be able to get work in, but I will continue working on this when I return, with the intention of getting it to a state that can be pushed within the next couple weeks.
## Tuesday, May 24, 2011
### Official GSoC start
This week marks the official start of the Google Summer of Code. While I started getting my feet wet last week after finishing the last of my finals and grading, the bulk of the work has just started turning out. I'll quick cover what I have from this last week and what I'm looking to get working this week.
Before the start, I worked out expanding the functionality of the currently implemented x/y/z bases, the work for which I have here. The previous implementation only allowed for evaluating inner products between states in the same basis and representing the states in the Jz basis, and then only with j=1/2 states. Using the Wigner D-function, implemented with the Rotation class, I implemented represent to go between the x/y/z bases for any j values. Both _eval_innerproduct and _rewrite_as were then created to take advantage of the represent function to extend the functionality of the inner product and to implement rewrite between any bases and any arbitrary j values.
This seems to be some documentation and tests away from being pushed, but there is something buggy with the Rotation.d function, implementing the Wigner small d-matrix. I noticed it when trying to do
>>> qapply(JzBra(1,1)*JzKet(1,1).rewrite('Jx'))
and I wasn't getting the right answer. As it turns out, the Rotation.d function, which uses Varshalovich 4.3.2 Eq 7, does not give the right answer for Rotation.d(1,1,0,-pi/2) or Rotation.d(1,0,1,pi/2). Namely, there is something wrong with the equation in that it doesn't change the sign of the matrix element when reversing the sign of the beta Euler angle. Running all four differential representations given by Varshalovich for the small d matrix, Eq 7-10, gives the wrong result, so the derivations of these will need to be checked to fix this. I have a bug report up here.
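For reference, the explicit $j=1$ small-d matrix in the convention $d^{j}_{m'm}(\beta)=\langle j,m'|e^{-i\beta J_y}|j,m\rangle$ (rows and columns ordered $m',m = 1,0,-1$; Varshalovich's index order may differ) is
$d^{1}(\beta)=\begin{pmatrix}\frac{1+\cos\beta}{2} & -\frac{\sin\beta}{\sqrt{2}} & \frac{1-\cos\beta}{2}\\ \frac{\sin\beta}{\sqrt{2}} & \cos\beta & -\frac{\sin\beta}{\sqrt{2}}\\ \frac{1-\cos\beta}{2} & \frac{\sin\beta}{\sqrt{2}} & \frac{1+\cos\beta}{2}\end{pmatrix}$
so the $\sin\beta$ entries are exactly the ones that must flip sign when $\beta\to-\beta$, which is the behavior the buggy implementation was missing.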
As for what I will be implementing this week, I already have the basics of the Wigner3j and CG class implemented, the work for this going up here. This includes creating the objects, _some_ of the printing functionality and numerical evaluation of the elements using the wigner.py functions. The meat of the class that I'm currently working on is the cg_simp function, which will simplify expressions of Clebsch-Gordan coefficients. I currently have one case working, that is
Sum(CG(a,alpha,0,0,a,alpha),(alpha,-a,a)) == 2a+1
which is Varshalovich 8.7.1 Eq 1. There are still some things to smooth out with the implementation, but I should have that worked out a bit better, in addition to some more simplifications by the end of the week.
That's all I have for now, watch for updates within the week as to what I've gotten done and what I have yet to do.
## Monday, April 25, 2011
### The best things in life are free (and open source)
Hi all,
This adventure into blogging is to document the work I will be doing this summer with SymPy, an open source symbolic mathematics library written in Python. The project will be done as a part of the Google Summer of Code program. This summer, I will be developing a symbolic class for creating Clebsch-Gordan coefficients and will develop the spin algebra in the existing quantum physics module to utilize these coefficients. The gory details can be read in my application. My mentor for this project will be Ondřej Čertík. The project officially starts May 24, but I'll be diving in once I finish up the last of my finals May 14. I will be documenting my progress on the project throughout the summer through this blog, and anyone interested in this is free to watch for updates once the project is underway. All my work will be pushed to my SymPy fork on github.
That's enough for now, I'll be checking back in once I get started in the summer. Now, back to trying to graduate. |
Question:
A tank car is stopped by two spring bumpers A and B, having stiffness ${ k }_{ A }$ and ${ k }_{ B }$ respectively. Bumper A is attached to the car, whereas bumper B is attached to the wall. If the car has a weight W and is freely coasting at speed ${ v }_{ c }$, determine the maximum deflection of each spring at the instant the bumpers stop the car. Given: ${ k }_{ A }$ = 15 × ${ 10 }^{ 3 }$ lb/ft , ${ k }_{ B }$ = 20 × ${ 10 }^{ 3 }$ lb/ft , W = 25 × ${ 10 }^{ 3 }$ lb , ${ v }_{ c }$ = 3 ft/s
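A sketch of the standard solution (assuming the bumpers are massless so both springs carry the same contact force, $k_A x_A = k_B x_B$, and taking $g = 32.2\ \mathrm{ft/s^2}$): conservation of energy gives
$$\frac{1}{2}\frac{W}{g}v_c^2 = \frac{1}{2}k_A x_A^2 + \frac{1}{2}k_B x_B^2,$$
and eliminating $x_B = (k_A/k_B)\,x_A$ yields
$$x_A = v_c\sqrt{\frac{W k_B}{g\,k_A(k_A+k_B)}} \approx 0.516\ \text{ft}, \qquad x_B = \frac{k_A}{k_B}\,x_A \approx 0.387\ \text{ft}.$$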
# Finding an equation to represent a jerk
1. Aug 15, 2009
### hover
hey everyone,
The title of this thread explains what I'm trying to do. I want to find an equation that can represent a jerk. A jerk is a change in acceleration with respect to time. What I really want though is an equation for a jerk that doesn't have a time variable in it. I think I know the answer but I want people to back up my answer so here we go.
We know that this is true
$$j=j$$
A jerk is equal to a jerk.... this really doesn't tell us anything about a jerk. So we now integrate by dt and we get
$$\int jdt= jt+C_1$$
The units of this equation are equal to an acceleration so this equation must be this
$$jt+a_i=a_f$$
Of course, we can keep integrating till we get down to distances so I'll show those now without doing the work.
$$\frac{jt^2}{2}+a_it+v_i=v_f$$
$$\frac{jt^3}{6}+\frac{a_it^2}{2}+v_it=d$$
Ok so now I have three equations that I could at least try solving for a jerk but there is a problem. All these equations involve time. The next logical thing that I can do is to substitute "t" into an equation, but first I need an equation to solve for time. I'll use the first equation.
$$jt+a_i=a_f$$
$$jt=a_f-a_i$$
$$t=\frac{a_f-a_i}{j}$$
Now I'll substitute that into the second equation.
$$\frac{jt^2}{2}+a_it+v_i=v_f$$
$$\frac{j(a_f-a_i)^2}{2j^2}+\frac{a_i(a_f-a_i)}{j}+v_i=v_f$$
then I simplify and try to get j on one side
$$\frac{a_f^2-2a_fa_i+a_i^2}{2j}+\frac{a_fa_i-a_i^2}{j}+v_i=v_f$$
$$\frac{a_f^2-2a_fa_i+a_i^2}{2j}+\frac{a_fa_i-a_i^2}{j}=v_f-v_i$$
$$\frac{a_f^2-2a_fa_i+a_i^2}{2j}+\frac{2a_fa_i-2a_i^2}{2j}=v_f-v_i$$
$$\frac{a_f^2-2a_fa_i+a_i^2+2a_fa_i-2a_i^2}{2j}=v_f-v_i$$
$$\frac{a_f^2-a_i^2}{2j}=v_f-v_i$$
$$\frac{a_f^2-a_i^2}{v_f-v_i}=2j$$
$$\frac{a_f^2-a_i^2}{2(v_f-v_i)}=j$$
So the final equation I get is this
$$j=\frac{a_f^2-a_i^2}{2(v_f-v_i)}$$
but is this right? Let's check the units. On top we have meters^2 per second^4 and on the bottom we have meters per second
$$\frac {m^2}{s^4}* \frac{s}{m}= \frac {m}{s^3}$$
Well at least the units are right but is this equation right?
Thanks for helping me out!!
2. Aug 15, 2009
### drizzle
looks fine. Does "jerk" stand for the word jerk I know, or am I missing something here?
Why would you need an equation to represent a jerk?
3. Aug 15, 2009
### hover
I was just extremely curious. I've been wanting to know an equation for a jerk for awhile now. What fueled my curiosity even more was the extreme lack of equations for a jerk. I have been searching everywhere for an equation that can represent a jerk, from my old physics book from high school to the internet. I could never find one.
4. Aug 15, 2009
### DrGreg
In physics, "jerk" means "rate of change of acceleration", in the same way that "acceleration" means "rate of change of velocity" and "velocity" means "rate of change of distance".
5. Aug 15, 2009
### DrGreg
To be honest, I haven't checked whether your equation is right or not. It sounds plausible. The reason you haven't found an equation in the books is because you are assuming the jerk remains constant over time. In the real world, that's unlikely to be true, so your equation isn't really of much practical use. (In my view.)
6. Aug 15, 2009
### drizzle
I see, he did mention that in his post
7. Aug 15, 2009
### Creator
Could someone tell me why all the latex equations used in, say, the first post, come out BLACK....cannot read them....maybe my Computer Settings ??
8. Aug 15, 2009
### ideasrule
That equation's right because a common kinematic equation is Vf² - Vi² = 2ad. For jerk, velocity corresponds to distance, acceleration to velocity, and jerk to acceleration, so the equation for jerk should logically be Af² - Ai² = 2jv
9. Aug 16, 2009
### gmax137
take a look at the thread "why is the math output hard to read sometimes" in the forum feedback section. I bet you're using internet explorer 6...
10. Aug 16, 2009
### DrGreg
You are probably using Internet Explorer v6 or some earlier version. If you are allowed to install software on the computer you use, you should upgrade to either a later version of Internet Explorer, or some other browser such as Firefox or Safari. There was a discussion about this towards the end of this thread (post #63 onwards). If you can't install software, all you can do is click on each equation to see the LaTeX that was used to create it.
Oops...gmax137 beat me to it!
11. Jan 3, 2010
### danielatha4
Xf = Xo + Vot + (1/2)aot² + (1/6)Jt³
af² = ao² + 2J(Vf - Vo)
I'm not 100% on that second one |
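Both results quoted in this thread can be checked symbolically under the constant-jerk assumption; a quick sympy sketch (illustrative, not from the thread):

from sympy import symbols, simplify, Rational

j, t, a_i, v_i = symbols('j t a_i v_i')
a_f = a_i + j*t                              # constant jerk
v_f = v_i + a_i*t + Rational(1, 2)*j*t**2

print(simplify((a_f**2 - a_i**2)/(2*(v_f - v_i)) - j))   # 0: hover's result holds
print(simplify(a_f**2 - (a_i**2 + 2*j*(v_f - v_i))))     # 0: the second equation above holds

Both differences simplify to zero, so both formulas hold whenever the jerk is constant.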
Transience of 3-dimensional Brownian motion
I'm attempting Exercise 5.33 of Le Gall's Brownian motion, Martingales and Stochastic Calculus.
Let $$B_t$$ be a 3-dimensional Brownian motion starting from $$x$$.
Part 6 asks me to show that $$|B_t| = |x| + \beta_t +\int_0^t\dfrac{ds}{|B_s|} \quad (*)$$
where $$\beta_t = \sum_{i=1}^3 \int_0^t \dfrac{B^i_s}{|B_s|} dB^i_s$$.
I have shown this by Ito's formula, and I have shown that $$\beta_t$$ is a 1-dimensional Brownian motion (by calculating the quadratic variation and using Levy's characterisation).
Now part 7 says
Show that $$|B_t| \rightarrow \infty$$ as $$t \rightarrow \infty$$ a.s. (Hint observe that $$|B_t|^{-1}$$ is a non-negative supermartingale.)
So I'm not sure how to show the transience. I feel like it could follow from (*), since if $$|B_t|$$ does not tend to infinity then the integral on the right hand side must tend to infinity, but then $$|B_t|$$ must tend to infinity to balance this (not sure how to make this rigorous).
My other idea is to say since $$|B_t|^{-1}$$ is an $$L^1$$ bounded supermartingale it converges a.s., and to show $$|B_\infty|^{-1} = 0$$, but again, I'm not sure exactly how to do this.
You're on the right track near the end. As a non-negative supermartingale, $$|B_t|^{-1}$$ converges almost surely; call the limit $$X$$. On the event $$\{X \ne 0\}$$, we have $$|B_t|$$ converging to the finite limit $$1/X$$. But intuitively it is absurd for a Brownian motion to do that (it is trying to "wiggle", not "settle down") and so you should be able to show that the probability of $$|B_t|$$ converging to a finite limit is 0.
As one way to show this, you presumably know that for a one-dimensional Brownian motion $$b_t$$, we have $$\limsup_{t \to \infty} b_t = +\infty$$ and $$\liminf_{t \to \infty} b_t =-\infty$$ almost surely. Since $$|B_t| = \sqrt{(b_t^1)^2 + (b_t^2)^2 + (b_t^3)^2} \ge |b_t^1|$$, we see that $$\limsup_{t \to \infty} |B_t| = +\infty$$ almost surely.
• Thanks, based on your answer I've tried to write a complete proof. Do you think it is legitimate? – AlexanderR Mar 16 '20 at 19:13
• By the way, if you happen to have the book and have looked at the question, I was wondering if you know what the purpose of the first few questions is? It seems like this result about the transience of three dimensional Brownian motion can actually be proved quite easily and without much preamble. In question 8 it shows that this $|B_t|^{-1}$ process is a strict local martingale, but that also doesn't need much preamble, so I am wondering if I am missing something. For example what is the point of the representation (*) in my question? Thank you so much for your help. – AlexanderR Mar 16 '20 at 19:32
• @AlexanderR: (I got ahold of the book.) It seems to be just a collection of useful facts about multidimensional Brownian motion, not all necessarily leading up to the transience. Though of course parts 4 and 5 are certainly needed to show that $|B_t|^{-1}$ is a supermartingale. – Nate Eldredge Mar 16 '20 at 23:20
• Ah okay that wasn't obvious to me. The fact that they said 'observe' made me think it was clear to see directly, and I thought I had a proof just by applying Jensen inequality twice with $1/x$ and $|x|$, but I realise I had one of the inequalities the wrong way round. Could you give me a clue as to how questions 4 and 5 help here? Obviously if we could upgrade the 'continuous martingale' property from question 4 to 'martingale' that would do it, but we know that's impossible. Other than that, I'm not very familiar with the relationship between supermartingale and local martingales. Thanks! – AlexanderR Mar 16 '20 at 23:44
• @AlexanderR: Part 4 shows that $|B_{t \wedge T_\epsilon}|^{-1}$ is a continuous local martingale. Indeed, it's bounded by $1/\epsilon$ so it's a continuous nonnegative martingale. That is, for any $s<t$ and any $0<\epsilon<|x|$ we have $$E[|B_{t \wedge T_\epsilon}|^{-1} \mid \mathscr{F}_s] = |B_{s \wedge T_\epsilon}|^{-1} \tag{*}.$$ Now from part 5 you can conclude that $T_\epsilon \to \infty$ as $\epsilon \to 0$, almost surely. So let $\epsilon \to 0$ in (*) and apply Fatou's lemma. This shows that $|B_t|^{-1}$ is a supermartingale. – Nate Eldredge Mar 17 '20 at 0:41
So would this be an answer?:
Since $$|B_t|^{-1}$$ is a non-negative supermartingale bounded in $$L^1$$, we have for some non-negative $$Y \in L^1$$,
$$|B_t(\omega)|^{-1} \rightarrow Y(\omega)$$ for almost all $$\omega \in \Omega$$. But since $$\limsup |B_t| = \infty$$, $$\liminf |B_t(\omega)|^{-1} = 0$$. But also, since $$|B_t(\omega)|^{-1}$$ converges, $$\liminf |B_t(\omega)|^{-1} = \lim |B_t(\omega)|^{-1} = Y$$. So $$Y=0$$ a.s. So $$\lim |B_t| = \infty$$ a.s.
• Yes, that works. – Nate Eldredge Mar 16 '20 at 20:14 |
# Homework Help: Finding E and B field of a weird charge distribution
1. Jan 13, 2017
### asdff529
1. The problem statement, all variables and given/known data
Initially there is a spherical charge distribution with a radius $R_0$ and uniform charge density $ρ_0$. Suppose the distribution expands spherically symmetrically such that its radius at time t is $R_0 + V t$, where V is the expansion speed. Assuming the density remains uniform inside the sphere as time increases, find the charge density, current density, E-field and B-field.
2. Relevant equations
4 maxwell equations and continuity equation
3. The attempt at a solution
I have computed $ρ$ and $J$, and E as well. I want to know if $J=ρV$ in the r direction. Then I find that the curl of E is 0, which means B is independent of time, which is strange. And I find it very complicated to solve Ampere's law with the Maxwell correction.
Any hints? Thank you
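One way to pin down $J$ and $B$ (a sketch, assuming the self-similar velocity field implied by keeping the density uniform, $v(r,t) = V r / R(t)$ inside the sphere):
$$\rho(t) = \rho_0 \frac{R_0^3}{R(t)^3}, \qquad \vec J = \rho(t)\,\frac{V r}{R(t)}\,\hat r \quad (r < R(t))$$
Inside, Gauss's law gives $\vec E = \dfrac{\rho(t)\, r}{3\epsilon_0}\,\hat r$, and differentiating in time shows $\epsilon_0\, \partial \vec E/\partial t = -\vec J$, so the total current in the Ampere-Maxwell law vanishes; together with the spherical symmetry this gives $\vec B = 0$ everywhere. Outside, $\vec E$ is the static Coulomb field of the total charge and does not change in time, consistent with the vanishing curl of E noted above.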
# Getting around the effects of low Mars gravity
If my Mars colonists don't want to be crippled, they need to bear their own weight as close to Earth normal as possible for their waking hours. How about shoes (leather from goats and sheep) with magnetic bits engineered inside the soles. Then, for the other part of the equation, what about hard or soft flooring that contains iron or steel particles. If that won't work, what about belts, shoes, or other clothing containing weighted fabric or inserts? Now, how about the babies who can’t walk and may never do so if their bones aren’t strengthened in some manner? What about the food animals? Can they put shoes on the animals' (sheep, goats, etc.) hooves such as horses wear? What do we do about fowl? Can a magnetic device be attached to the bottom of their feet? I can’t think how artificial gravity could work on a planet’s surface. No convenient alien technology here, just orphaned settlers.
• What issues are you aware of with low gravity? There's quite a lot of issues that are associated with low gravity, not all of which are helped by such shoes. Understanding which of those issues you perceive as important may help direct our answers. – Cort Ammon Sep 21 '16 at 4:00
• Russian astronauts have survived 500+ days in absolutely no gravity, and they were not "crippled". Their possibilities for exercise on MIR were rather limited, compared to ISS or spending the day as a farmer and construction worker on Mars. – Karl Sep 22 '16 at 3:13
• The Defense Dept would consider 500 days a TDY (temporary duty assignment). These colonists would be conceived, born, and live their entire lives in 1/3 Earth normal gravity. "Bone remodels in response to stress in order to maintain constant strain energy per bone mass throughout. To do this, it grows denser in areas experiencing high stress, while resorbing density in areas experiencing low stress. On Mars, where gravity is about one-third that of earth, the gravitational forces acting on astronauts' bodies would be much lower, causing bones to decrease in mass and density.: (Wikipedia). – Phyllis Stewart Sep 22 '16 at 18:31
The first generation born on Mars could be genetically manipulated to be better fit for the environment. A similar solution can work for animals. In fact we don't know much about how low gravity will affect pregnancy.
Aside from that, I agree with Andreas Heese's answer.
As per the effects that low gravity has, this flow chart by the NSBRI (National Space Biomedical Research Institute) should serve as a guideline:
You may also want to see how Men and Women Adapt Differently to Spaceflight:
Of course Mars has other challenges such as the lack of a magnetic field, the ambient temperature, and the isolation and psychological stress of the settlers.
• Mars is not microgravity. – Karl Sep 22 '16 at 2:08
• @Karl yeah, you are right. Mars gravitational acceleration is 0.378g, what's that, decigravity? As long as we don't have information on the effects of prolonged decigravity on humans - microgravity should serve as a guideline. – Theraot Sep 22 '16 at 2:51
• It's a bit less than half of earths gravity. Why would there be any problems? – Karl Sep 22 '16 at 3:06
• @Karl For the same reason they are in microgravity: we are not used to being in different gravity, and our systems heavily exploit the presence of gravity in ways that will surprise you. For the moment no one knows whether or not there will be problems with Mars gravity. Have you never wondered how, for goodness' sake, one cell is able to turn into an adult mammal? We are talking about a very complex biological machine, and our current knowledge does not allow us to predict the results, so preparing for the worst-case scenario is pretty wise: if there is a problem, we are ready; if not, fine, better for us. – MolbOrg Sep 22 '16 at 14:12
• Flying animals adapt best to low gravity (1/3 Earth normal). Fish will swim in circles unless a light source is provided to give them proper orientation for "up and down." The fetus must orient head down before birth. Don't you think that is a matter of gravity assistance? If land and water animals have trouble, of course human fetuses will as well. Wikipedia: Bone remodels in response to stress in order to maintain constant strain energy per bone mass throughout. To do this, it grows denser in areas experiencing high stress, while resorbing density in areas experiencing low stress. – Phyllis Stewart Sep 22 '16 at 18:45
I am assuming you did some research and found out that under orbital gravity (ISS), bones lose density. I also assume that this is fully true, understood well, and also applies for low gravity of mars. I am not sure about that, or if it even is a problem at all, but i don't want to research or discuss that, as it is out of focus for the question.
Putting on heavy shoes does not increase your bone stability. It doesn't even increase the "weight" of your body, it's just dead weight... on your shoes. While it becomes harder to walk, it will strengthen your muscles, but to increase bone density, you need weight to REST on the bones. So you'd need to shoulder huge weights, so they'd push down on your body. Wearing super heavy clothing might help (and it might look cool having everyone walk around in platemail).
But i think the basic problem of bones requiring stress to fully develop couldn't be handled this way. These days in space, bone density problems are countered by intensive special training. Maybe you can have facilities where adults can train hard to keep their bodies healthy? I have no idea how to solve the problem for animals or infants, though. Maybe a special diet or medicine could solve the problem for you?
• +1 for martians wearing full platemail. It also makes sense with their intensive training in... martian arts. There is definitely not enough media about martian knighnjas™ – xDaizu Sep 21 '16 at 12:02
• What about the fetuses, infants, and children. They are most at risk. I am not proposing heavy shoes. The idea is to create a higher magnetic pull by incorporating pairs of attracting elements in the footwear and the flooring. Gravity creates the needed stressors for bone formation and maintenance. A properly calculated magnetic pull could, I believe, create the same effect. Otherwise, unless the problem is solved, long term colonization on planets with gravity less than 90% Earth normal will never be possible. – Phyllis Stewart Sep 22 '16 at 18:36
• Your idea of special facilities and periods of intense training made me think of the low-tech solution of merry-go-rounds all over the place. Those spinning (everyone takes turns) are working their muscles and getting exercise, those riding get the extra "weight" (acting like gravity to stress the bones) - especially if the merry-go-rounds are constructed with walls or a deeply sloped floor, so people walk inside and more of that force can run head-to-feet not just sideways. Kids and even infants should be able to play inside, walk or crawl for extra gravity-like stress while developing. – Megha Sep 23 '16 at 1:26
Who says there are any significant health problems under mars gravity? We know that most effects of absolutely no gravity can be fought well with one hour of physical exercise (as can a lot of earth health problems, btw. ;-)), and Mars still has $0.4 g$. I think any planet able to hold an atmosphere at civilised temperatures is totally unproblematic with respect to gravity.
The main problem esp. early Russian cosmonauts experienced is loss of bone density and weakening muscles, like what is known on earth from people with e.g complicated bone fractures, requiring extremities to be immobilised for an extended period of time.
Bone (& muscle) tissue does not grow in response to gravity, but to the forces the muscles exert on them. The problem in zero g is that practically nothing requires any force at all. So you put an hour of exercise on the schedule of astronauts, and it helps a lot. As does very careful exercising for recuperating patients. They are not 100% fit when they return to earth, or to normal life, but far from having turned into wrecks. The earliest Russian long-term cosmonauts had to be carried on stretchers after return. When the space shuttle brought back long term crew members from ISS, they walked down the gangway on their own.
A person that lounges about all day and watches TV turns sick, on Mars probably faster than on earth. Luckily there is no TV there, at least in the beginning. But work on Mars would still be work, especially outside in your (rather heavy and rigid) pressure suit. Ten kg rucksack feels like 4 kg only? Have one twice as large!
I wouldn't bet that raising children on Mars would be as uncritical. I certainly would not do it on a space station without a lot of animal experiments before. Maybe have a centrifugal kindergarten. ;-)
But the basic message would be the same: It's not gravity, it is your musclework that keeps the bones healthy.
• they show you only that, because what else is there to show)) Which chemicals they are using, they will never tell you. I bet sports doping specialists are working for the space program)) most likely indirectly) – MolbOrg Sep 22 '16 at 14:19
• Persons living their entire lives on a planet with .3 Earth normal gravity and who weigh only half what they would on Earth will have significant bone formation challenges. Physical exercise must be weight bearing and be calibrated to replicate Earth's normal effects. Infants, children, and the elderly will be unable to participate. No chemicals can mimic gravity's effects. A process or application to produce the effects of artificial gravity must be developed to enable long term space colonization. – Phyllis Stewart Sep 22 '16 at 18:53
• You might find it interesting to read link , link , link This one is funny: when selecting candidates for testing, take a look at the exclusion criteria; there are obvious reasons, but still. – MolbOrg Sep 22 '16 at 21:41
Same as we do in space, I should think. Use weight training and anchored treadmills to retain as much muscle tone as possible in an effort to stave off osteoporosis. I don't see that having heavy shoes will do anything other than tone up the leg muscles.
Andreas answered as I typed the above, and he's right.
For children, you just have to do the best that you can. Over the course of many generations, they would ideally start to evolve and adapt to the new gravity, but the chances are that your colony will die out before then - to my mind, it'll be doubtful that enough children will survive to procreate more than another generation or two unless you were really strict on the weight training.
Obviously, the more time taken up in physical exercise/conditioning, the less time can be used for more productive (survival orientated) tasks. Chances are that the exercise will come second place.
I don't think you'd have large animals - chickens would probably be the best source of food and you could keep those in small/low coops to prevent them from harming themselves. Horses/goats/cows don't really have a place on Mars - if they were there, then they'd be an immediate food source.
• If the problem is solved for humans, it would be solved for all animals as well. Sheep, goats, pigs, fowl, fish, and rabbits could all be good protein sources. Cows are obviously far too inefficient for use. A wide variety of foods must be provided for health and emotional well-being. Remember, this is generations of persons living all their lives on Mars. I'm afraid genetic manipulation would be of doubtful effect. This is a purely physical question. – Phyllis Stewart Sep 22 '16 at 18:58
Large Jumps? A person of normal bodily strength on Mars should be able to jump a lot higher than on earth, lift heavier things, leap further. If you design the base without stairs, or use some other design that forces them to do these things regularly, then your people could maintain normal fitness by leaping around and hanging from the ceiling. Alternatively you can make a parabolic floor and rotate it so that real and centrifugal gravity combine.
• Great idea! Lets give them centrifugal beds! xkcd.com/123 – Karl Sep 22 '16 at 11:30
• The bones do not need aerobic exercise. They need weight-bearing activity. Besides, you can't teach toddlers to perform large jumps. Vitamin and mineral supplements won't have any effect. Weight-bearing only. In addition, the colonists will weigh only half what they would on Earth. This significantly compounds the issue. – Phyllis Stewart Sep 22 '16 at 18:49
One important question is, will they ever return to Earth? 0.4g is not 0g and I bet the body will adapt to this environment with muscle and bone loss, but not so much as to be crippled.
End result? If they are to return to Earth, they will not be able to walk unassisted. But on Mars they would probably be even healthier.
Finding inverses in Z mod n
1. Jun 19, 2012
tonit
1. The problem statement, all variables and given/known data
Let's say I want to find the inverse of $\bar{4}$ in $\mathbb{Z}_{13}$.
So I get $13 = 4\cdot 3 + 1$ and so $1 = 13 - 4\cdot 3$.
But this doesn't show that $3$ is inverse of $4$. So I have to express $4 = 3\cdot 1 + 1$
which yields that $1 = 4 - 1\cdot 3 = 4 - 3\cdot (13 - 3\cdot 4) = 10\cdot 4 - 3 \cdot 13$ from where I get that $\bar{10}$ is inverse of $\bar{4}$ mod $13$.
So which is the right way for finding inverses in Zn? I'm attaching a screenshot from my book
Attached Files: ATATATATATA.JPG (13.5 KB)
Last edited: Jun 19, 2012
2. Jun 19, 2012
I like Serena
Hi tonit!
You're trying to find x with $4x \equiv 1 \pmod{13}$.
Since you already have $13 = 4\cdot 3 + 1$, it follows that $4 \cdot 3 \equiv -1 \pmod{13}$.
That's almost, but not quite what you need.
So let's multiply left and right by -1.
Then you get:
$$4 \cdot -3 \equiv 1 \pmod{13}$$
Much closer!
It follows that $4^{-1} \equiv -3 \equiv 10 \pmod{13}$.
Note that with bigger numbers you typically need the euclidean algorithm to find the inverse.
For that, I suspect you'll need to learn how to apply this algorithm.
3. Jun 19, 2012
tonit
4. Jun 19, 2012
HallsofIvy
More generally, to find the inverse of a, modulo n, you are looking for an integer b< n such that ab= 1 (mod n) which is the same as saying ab= kn+ 1 for some integer k.
That is the same as solving the Diophantine equation ab- kn= 1 where a and n are known and you want to find integers b and k. To find the multiplicative inverse of 4 modulo 13, we want to solve 4b- 13k= 1 and can do that using the Euclidean Algorithm as I like Serena suggests:
4 divides into 13 three times with remainder 1 so we have immediately 13(1)= 4(3)+ 1 so that 4(-3)+ 13(1)= 4(-3)- 13(-1)= 1. One solution is b= -3 which is the same as -3+ 13= 10 mod 13.
Here's how that would work for "bigger numbers": If the problem were, say, to find the multiplicative inverse of 24 modulo 113, we would look for x such that 24x= 1 (mod 113) which is the same as 24x= 113k+ 1 or 24x- 113k= 1.
24 divides into 113 four times with remainder 17 so that 113(1)- 24(4)= 17. 17 divides into 24 once with remainder 7: 24(1)- 17(1)= 7. 7 divides into 17 twice with remainder 3: 17(1)- 7(2)= 3. Finally, 3 divides into 7 twice with remainder 1: 7- 3(2)= 1.
Replacing that "3" in the last equation with 17(1)- 7(2) from the previous equation gives 7- (17(1)- 7(2))2= 7(5)- 17(2)= 1. Replacing that "7" with 24(1)- 17(1) gives (24(1)- 17(1))(5)- 17(2)= 24(5)- 17(7)= 1. Replacing that "17" with 113(1)- 24(4) gives 24(5)- (113(1)- 24(4))7= 24(33)- 113(7)= 1.
That is, one solution to 24x- 113k= 1 is x= 33, k= 7 and tells us that the multiplicative inverse of 24, modulo 113, is 33: 24(33)= 792= 7(113)+ 1.
Last edited by a moderator: Jun 20, 2012
5. Jun 20, 2012
tonit
Thank you HallsofIvy. It's all clear now :D |
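The back-substitution worked through above is exactly the extended Euclidean algorithm; a small Python sketch of it (illustrative, not part of the thread):

def extended_gcd(a, n):
    # returns (g, x, y) with a*x + n*y = g = gcd(a, n)
    old_r, r = a, n
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q*r
        old_x, x = x, old_x - q*x
        old_y, y = y, old_y - q*y
    return old_r, old_x, old_y

def mod_inverse(a, n):
    g, x, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError("a is not invertible mod n")
    return x % n

print(mod_inverse(4, 13))    # 10
print(mod_inverse(24, 113))  # 33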
# Diluting acetic acid to obtain a solution of pH 5 [closed]
What percent of a solution needs to be acetic acid for a particular $\mathrm{pH}$? I have 100% acetic acid, and want $1~\mathrm{L}$ of solution with a $\mathrm{pH}$ of 5. I've done some research, but I still do not understand how to solve this problem.
• The original question was fine as it was. But then someone edited it and it seems like this is purely a homework question, which it was not, and that was why I decided to answer. Apr 19 '15 at 15:09
This is a dilution problem. So in this instance, you want to add deionized water to your acetic acid. However, the question is, how much water do you add?
It is necessary first to find the molarity of the acetic acid. On PubChem, we see that the molecular weight is $60.05196 \ \frac{\text{g}}{\text{mol}}$ and density is $1.0446 \ \frac{\text{g}}{\text{cm}^3}$ @ $25\ ^\circ \text{C}$.
If we have 1 L of acetic acid, then
$$1.0446 \ \frac{\text{g}}{\text{cm}^3} \cdot \frac{1000\ \text{cm}^3}{1\ \text{L}} \cdot \frac{1\ \text{mol}}{60.05196\ \text{g}} = 17.394\ \frac{ \text{mol}}{\text{L}}$$
We know that,
$$\text{pH} = -\log{\ce{[H3O+]}}$$
which implies that the concentration of hydronium ions to make the solution with pH equal to 5 is, $\ce{[H3O+]} = 1 \cdot 10^{-5} \ \text{M}$. We make our RICE table,
$$\ce{HC2H3O2 + H2O -> C2H3O2- + H3O+}$$
\begin{array} {|c|c|c|c|c|} \hline \text{Initial conc.} & x \ \text{mol} & - & \text{0 mol} & \text{0 mol}\\ \hline \text{Change conc.} & -1 \cdot 10^{-5}\ \text{M} & - & 1 \cdot 10^{-5}\ \text{M} & 1 \cdot 10^{-5}\ \text{M}\\ \hline \text{End conc.} & x - 1 \cdot 10^{-5}\ \text{M} & - & 1 \cdot 10^{-5}\ \text{M} & 1 \cdot 10^{-5}\ \text{M}\\ \hline \end{array}
$$K_\text{a} = \frac{[\ce{H3O+}][\ce{CH3COO-}]}{[\ce{CH3COOH}]} = \frac{(0.00001)(0.00001)}{(x - 0.00001)} = 1.75 \cdot 10^{-5}$$
$$\frac{(1 \cdot 10^{-10})}{(x - 0.00001)} = 0.0000175$$ $$1 \cdot 10^{-10} = 0.0000175x - 1.75 \cdot 10^{-10}$$ $$2.75 \cdot 10^{-10} = 0.0000175x$$ $$x = 1.6 \cdot 10^{-5}\ \text{M} = [\ce{CH3COOH}]$$
Thus, you go from a $17.394\ \text{M}$ solution of acetic acid, to a $0.000016\ \text{M}$, which is a factor of $1.09\cdot10^{6}$. You can then dilute the solution by the appropriate amount by placing the amount of acid you want in a volumetric flask, and then successively dilute it to the required pH.
Two assumptions,
1. Deprotonation is small enough that the equilibrium concentration of the acid is approximately equal to the same as its initial concentration.
2. Autoprotolysis of water does not significantly contribute to pH.
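The numbers above can be reproduced in a few lines of Python (a sketch using the same Ka, density and molecular weight):

Ka = 1.75e-5                          # acetic acid at 25 °C
h = 1e-5                              # target [H3O+] for pH 5
c_stock = 1.0446 * 1000 / 60.05196    # molarity of glacial acetic acid, ~17.39 M

c_acid = h + h*h/Ka                   # from Ka = h**2 / (c_acid - h)
dilution = c_stock / c_acid

print(c_acid)     # ~1.57e-5 M of acetic acid needed
print(dilution)   # ~1.1e6, the required dilution factor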
• Very good answer, in particular the use (and demonstration) of the RICE table
– user15489
Apr 19 '15 at 8:23
• why use density? Apr 19 '15 at 9:57
• @ADG I used density to easily find the molarity of 100% acetic acid. If you look at the chain-link calculations above, you can see I get the molarity that I wanted. For reference, the pH is roughly 1 with purely 100% acetic acid which is interesting since acetic acid is a "weak acid". Apr 19 '15 at 15:07
What percent of a solution needs to be acetic acid for a particular pH? I have 100% acetic acid, and want 1 L of solution with a pH of 5. I've done some research, but I still do not understand how to solve this problem.
You need to know that if you add water the moles remain the same, so $\rm M_1V_1=M_2V_2$. You also need to know $\rm pH=-\log[H^+]$, from which you get the concentration of $\rm H^+$. Now for weak acids the cubic equation used to determine pH has some negligible terms, hence we use $\rm [H^+]=\sqrt{K_aM_2}$ where $\rm K_a$ is very common and I remember it as $1.7\times10^{-5}\sim10^{-4.7}$.
## Sectioning a chain of operators and dot as reverse application
I have a meager syntax proposal and am curious if anyone has explored this design, knows of a language that uses it, or can think of a problem with it. It's comprised of two aspects that work together to support certain idioms in a neighborhood of Haskell syntax.
The first aspect is to generalize sectioning of binary operators to chains of binary operators where the first and/or last operand in the chain is missing. Here's a GHCi interaction:
Prelude> (1 + 2*)
The operator `*' [infixl 7] of a section
must have lower precedence than that of the operand,
namely `+' [infixl 6]
in the section: `1 + 2 *'
So Haskell doesn't like the expression (1 + 2*). The proposal is that this would instead be a section of an operator chain, equivalent to (λx. 1 + 2*x). Similarly for (*2 + 1).
The second aspect is to use dot for reverse application (rather than composition). That is,
x.f = f x
Dot and juxtaposition would have equal precedence allowing an OOP-like style (after modifying map and filter to take the list first):
[1, 2, 3].map (+1).filter (>2) === filter (map [1, 2, 3] (+1)) (>2)
In combination with the first aspect, we can recover composition as:
f compose g = (.g.f)
And in general we can capture the tails of OOP-like expressions like this:
frob = (.map (+1).filter (>2))
Has anyone used or seen this approach or does anyone see problems with it?
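For concreteness, here is a minimal Python sketch that emulates the dot-as-reverse-application idea at the library level (the `Chain` wrapper and the `FUNCS` table are made up for illustration; languages with this feature do it in the parser, not in a library):

```python
FUNCS = {
    "map": lambda xs, f: [f(x) for x in xs],
    "filter": lambda xs, p: [x for x in xs if p(x)],
}

class Chain:
    """Wrap a value so that value.f(args) means f(value, args)."""
    def __init__(self, value):
        self.value = value
    def __getattr__(self, name):
        fn = FUNCS[name]                                 # look up the free function
        return lambda *args: Chain(fn(self.value, *args))

# [1, 2, 3].map (+1).filter (>2) from the proposal, spelled in Python:
print(Chain([1, 2, 3]).map(lambda x: x + 1).filter(lambda x: x > 2).value)  # [3, 4]
```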
## Comment viewing options
### x.foo(y) = foo(x,y)
I've seen several languages with a feature of the form x.foo(y) is syntactic sugar for foo(x,y). I'm pretty sure most of them were multimethod languages. Unfortunately, I'm blanking on which languages these were. None were languages I've much used, only studied in passing. I just know there are several of them. And users/authors seemed pretty happy/smug about it. :)
Personally, I think it's a good idea for making OO classes more openly extensible, and certainly more pleasant and generic than Haskell's awkward backquote form (where x f y = f x y, but x f 2 y is badly formed).
### I've seen several languages
I've seen several languages with a feature of the form x.foo(y) is syntactic sugar for foo(x,y). I'm pretty sure most of them were multimethod languages. Unfortunately, I'm blanking on which languages these were.
### Python works that way:class
Python works that way:
class myClass:
 def myFunc(self, name):
  self.name = name

myClass().myFunc("Joe") behaves like myClass.myFunc(myClass(), "Joe")
### Unfortunately, I'm blanking
Unfortunately, I'm blanking on which languages these were.
It's present in the D language, where it is named UFCS (Universal Function Call Syntax). It allows you to write code like:
[1, 2, 3].map!q{a + 1}.filter!q{a > 2}
### I was thinking D might have
I was thinking D might have been one of them, but I couldn't find the documentation for it when I looked. (I didn't think to look for "UFCS".) Thanks.
### That's nearly what scala has
That's nearly what scala has with its underscore variable. (1+2*_) and _.map(_+1).filter(_>2)
### Dot as reverse function application (in Haskell)
Note that any proposal to change the behaviour of dot will meet stiff opposition. (That wiki page has a link to a thread.)
### Thanks
That link looks 100% on point.
### Linguistics
The missing operand proposal lends itself to ambiguities, e.g. when there are more than one missing operands.
One area to look for a solution might be natural language syntax, e.g. https://en.wikipedia.org/wiki/Theta_role
### Can you give an example?
The missing operand proposal lends itself to ambiguities, e.g. when there are more than one missing operands.
What's an example?
### Superficial notational observation
The dot normally is used to select a method relative to an object which is an instance of a class. In PL terminology, I prefer to use the definition for a class as a "reinstantiable module declaration."
You can introduce dots but what's the point if you don't have any OO semantics to accompany it?
This reads like trivialized bickering that if OO has dots then an FP language should have them too without regard for what the semantics of method selection in an OO language is. Yeah, FP can have a dot notation too, so what?
### Not so superficial
Marco, ahem, dot suffix notation long predates OO or indeed computer languages. (For example Russell & Whitehead's Principia Mathematica 1910.) Did anybody complain about consistent semantics when Codd and others introduced dot notation into query languages? Perhaps OO should stop using it because OO doesn't follow SQL's semantics?
If you'd followed around some of those Haskell links before making a superficial response you would have found:
A practical reason that many text editors are tuned for dot suffixes to prompt for valid names to follow. In the case of OO that goes Object --> prompt for method. In FP it could go argument --> function; record --> field; etc.
A mental image reason (call it semantics if you will) that some software design 'flows' better from focus to action (or noun to verb). Compare GUIs where you point at the screen and right-click for the action, as opposed to a typical green-screen flow of first choose the action from a menu then hunt for a record in a list.
If you don't like dot as reverse function application, nothing's forcing you to use it. Personally, I strongly dislike Haskell's use of dot as function composition -- even though I've programmed in Haskell for years.
### Just sharpening the conversation
Historical fallacious argument. I can pull any observation out of history to make any argument seem correct.
That UIs are geared towards OO is actually completely in line with my observation that people want OO but won't get it. As is the second argument. There probably already is an operator which allows you to write "x ^ f" instead of "f x", so whatever noun-verb-order argument is moot.
People wanted imperative programming and got it in a broken form in terms of monadic programming in Haskell too, so my guess is they'll go ahead and implement this. But all you get is a reverse application/field selector and that just isn't that useful in a pure functional language.
(If you'll bite you'll show me how to do pure functional OO with a dot and type classes. Yeez. Shouldn't you bite now?)
# Math Help - interesting question
1. ## interesting question
Hi,
I have a set of data that belong to a company revenue generated through many different products.
In total 2000 products.
So the data is like this
Product Revenue
Product 1 $2000
The list goes on to Product 2000 and revenue varies.
The question is to determine which products to promote. There is no set budget.
So based on revenue the question asks to suggest which products to promote. (Company principle is to promote higher revenue making products.) But how do you determine the most optimum point?
I sorted all values from high to low and created a chart. Then added a linear trend line.
And I was going to take the intercepting point as the optimum point? Any ideas please?
2. Originally Posted by kotum45
Hi,
I have a set of data that belong to a company revenue generated through many different products.
In total 2000 products.
So the data is like this
Product Revenue
Product 1 $2000
The list goes on to Product 2000 and revenue varies.
The question is to determine which products to promote. There is no set budget.
So based on revenue the question asks to suggest which products to promote. (Company principle is to promote higher revenue making products) but how do you determine the most optimum point.
I sorted all values from high to low and created a chart. Then added a linear trend line.
And I was going to take the intercepting point as the optimum point? |
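One way to make the "intercept with the trend line" idea concrete, on made-up revenue figures (the real data is not given in the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
revenue = np.sort(rng.pareto(1.5, size=2000) * 1000)[::-1]  # sorted high to low

rank = np.arange(1, revenue.size + 1)
slope, intercept = np.polyfit(rank, revenue, 1)             # linear trend line
trend = slope * rank + intercept

cutoff = rank[revenue < trend][0]   # first rank where the sorted curve drops below the trend
print(f"promote roughly the top {cutoff} products")
```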
## Vanishing of the fundamental gap for (horo)convex domains in hyperbolic space
Friday, February 26, 2021 - 12:00pm to 1:00pm
## Speaker
Xuan Hien Nguyen
Associate Professor
Iowa State University
## Abstract
For the Laplace operator with Dirichlet boundary conditions on convex domains in $H^n$, $n \geq 2$, we prove that the product of the fundamental gap with the square of the diameter can be arbitrarily small for domains of any diameter. This property distinguishes hyperbolic spaces from Euclidean and spherical ones, where the quantity is bounded below by $3 \pi^2$. We finish by talking about horoconvex domains.
## Description
Contact Julien Paupert for the Zoom link. |
# Trigonometry Aptitude Questions
Quantitative Aptitude Questions and Answers section on “Trigonometry” with solution and explanation for competitive examinations such as CAT, MBA, SSC, Bank PO, Bank Clerical and other examinations.
1.
The circular measure of an angle of an isosceles triangle is 5π/9. Circular measure of one of the other angles must be
[A]$\frac{4\pi }{9}$
[B]$\frac{2\pi }{9}$
[C]$\frac{5\pi }{9}$
[D]$\frac{5\pi }{18}$
$\frac{2 \pi}{9}$
Sum of remaining two angles = $\pi -\frac{5\pi }{9}=\frac{4\pi}{9}$
∴ Each angle = $\frac{1}{2}\times \frac{4\pi }{9}=\frac{2\pi }{9}$
Hence option [B] is the right answer.
2.
$\left ( \frac{3\pi }{5} \right )$ radians is equals to :
[A]120°
[B]180°
[C]108°
[D]100°
108°
$\because \pi radian=180$°
$\therefore \frac{3\pi }{5} radian = \frac{180}{\pi}\times\frac{3\pi }{5}$
=108°
Hence option [C] is the right answer.
3.
If $0\leq \theta \leq \frac{\pi}{2}$ and $\sec ^{2}\theta +\tan^{2}\theta=7$, then Θ is:
[A]$\frac{\pi}{5}$ $Radian$
[B]$\frac{\pi}{6}$ $Radian$
[C]$\frac{5\pi}{12}$ $Radian$
[D]$\frac{\pi}{3}$ $Radian$
$\frac{\pi}{3}$ $Radian$
Given Expression, $\sec ^{2}\theta +\tan ^{2}\theta =7$
$=>1+\tan ^{2}\theta+\tan ^{2}\theta=7$
$=>2 \tan ^{2}\theta=7-1=6$
$=>\tan ^{2}\theta=3$
$=>\tan \theta=\sqrt{3}$
$\because \tan 60$°$=\sqrt{3}$
$\therefore \theta=60$°
$\because 180$°=$\pi Radian$
$\therefore 60$°$= \frac{\pi }{180}\times60=\frac{\pi }{3} Radian$
Hence option [D] is the right answer.
4.
In circular measure, the value of the angle 11°15′ is :
[A]$\frac{\pi^{c}}{16}$
[B]$\frac{\pi^{c}}{8}$
[C]$\frac{\pi^{c}}{4}$
[D]$\frac{\pi^{c}}{12}$
$\frac{\pi^{c}}{16}$
11°15′
$=> 11\textdegree$ + $\frac{15\textdegree}{60}$
$=> 11\textdegree$ + $\frac{1}{4}$ = $\frac{45\textdegree}{4}$
$=>$ [180° = π Radian]
$\therefore \frac{45^{\circ}}{4} = \frac{\pi}{180}\times \frac{45}{4}=\frac{\pi^{c}}{16}$
Hence option [A] is correct answer.
5.
In a triangle ABC, $\angle ABC=75\textdegree$ and $\angle ACB=\frac{\pi^{c}}{4}$. The circular measure of $\angle BAC$ is:
[A]$\frac{\pi}{6}$$Radian$
[B]$\frac{\pi}{2}$$Radian$
[C]$\frac{5\pi}{12}$$Radian$
[D]$\frac{\pi}{3}$$Radian$
$\frac{\pi}{3}$$Radian$
$\angle ABC = 75^{\circ}$
$[\because 180 \textdegree = \pi radian]$
$75^{\circ} = \frac{\pi}{180}\times 75 = \frac{5 \pi}{12}$ $Radian$
$\therefore \angle BAC = \pi - \frac{\pi}{4} - \frac{5 \pi}{12}$
$=> \frac{12\pi-3\pi -5\pi}{12} = \frac{4 \pi}{12}$
$=> \frac{\pi}{3}$$Radian$
Hence option [D] is the right answer.
6.
The degree measure of 1 radian is :
[A]$57\textdegree32'16''$
[B]$57\textdegree61'22''$
[C]$57\textdegree16'22''$
[D]$57\textdegree22'16''$
$\mathbf{57\textdegree16'22''}$
$\therefore 1 radian = \frac{180\textdegree}{\pi}$
$= \frac{180\times7\textdegree}{22}$
$= \frac{630}{11} = 57\frac{3}{11}\textdegree$
$= 57\textdegree\frac{3}{11}\times 60' = 57\textdegree\frac{180'}{11}$
$= 57\textdegree16'\frac{4}{11}\times 60'' = 57\textdegree16'22''$
Hence option [C] is the right answer.
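As a quick numeric check of this conversion, using both $\pi \approx \frac{22}{7}$ (as in the solution) and the exact value of $\pi$:

```python
import math

def to_dms(deg):
    d = int(deg)
    m = int((deg - d) * 60)
    s = round(((deg - d) * 60 - m) * 60)
    return d, m, s

print(to_dms(180 * 7 / 22))   # (57, 16, 22)  -- with pi ~ 22/7, matching option [C]
print(to_dms(180 / math.pi))  # (57, 17, 45)  -- with the exact value of pi
```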
7.
In the sum of two angles is $135\textdegree$ and their difference is $\frac{\pi}{12}$. Then the circular measure of the greater angle is :
[A]$\frac{\pi}{3}$
[B]$\frac{5\pi}{12}$
[C]$\frac{2\pi}{3}$
[D]$\frac{3\pi}{5}$
$\mathbf{\frac{5\pi}{12}}$
Two angles = A and B where A > B.
$\therefore A + B = 135\textdegree$
$= \left ( \frac{135\times \pi}{180} \right ) radian$
$=> A + B = \left ( \frac{3\pi}{4} \right ) radian$…..(1)
$A - B = \frac{\pi}{12}$…..(2)
$2A = \frac{3\pi}{4} + \frac{\pi}{12}$
$= \frac{9\pi+\pi}{12} = \frac{10\pi}{12} = \frac{5\pi}{6}$
$\therefore A = \frac{5\pi}{12}radian$
Hence option [B] is the right answer.
8.
If the sum and difference of two angles are $\frac{22}{9} radian$ and $36\textdegree$ respectively, then the value of smaller angle in degree taking the value of $\pi$ as $\frac{22}{7}$ is :
[A]$48\textdegree$
[B]$60\textdegree$
[C]$56\textdegree$
[D]$52\textdegree$
$\mathbf{52\textdegree}$
$\because \pi radian = 180\textdegree$
$\therefore \frac{22}{9}radian = \frac{180}{\pi}\times\frac{22}{9}$
$= \frac{180}{22}\times \frac{22\times7}{9} = 140\textdegree$….(1)
According to the question,
$A + B = 140\textdegree$
and, $A - B = 36\textdegree$ ……(2)
$2A = 176\textdegree$
$=> A = \frac{176}{2} = 88\textdegree$
From equation (1),
$\therefore 88\textdegree+B = 140\textdegree$
$=> B = 140\textdegree - 88\textdegree = 52\textdegree$
Hence option [D] is the right answer.
9.
If $\cos x+\cos y = 2,$ the value of $\sin x+\sin y$ is :
[A]-1
[B]1
[C]0
[D]2
0
$\cos x+\cos y = 2$
$\because \cos x\leq 1$
$=> \cos x = 1; \cos y = 1$
$=> x = y = 0\textdegree [\cos0\textdegree = 1]$
$\therefore \sin x + \sin y = 0$
Hence option [C] is the right answer.
10.
The minimum value of $2\sin ^{2}\theta +3\cos ^{2}\theta$ is :
[A]0
[B]2
[C]3
[D]1
2
$2\sin ^{2}\theta +3\cos ^{2}\theta$
$=> 2\sin ^{2}\theta +2\cos ^{2}\theta +\cos ^{2}\theta$
$=> 2\left ( \sin ^{2}\theta +\cos ^{2}\theta \right ) + \cos ^{2}\theta$
$=> 2 + \cos ^{2}\theta$
Minimum value of $\cos\theta = -1$,
but $\cos ^{2}\theta \geq 0$, and $\cos ^{2}\theta = 0$ when $\theta = 90\textdegree$
$[\cos 0\textdegree = 1, \cos 90\textdegree = 0]$
Hence required minimum value = 2 + 0 = 2
Option [B] is the right answer. |
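A quick numerical check of this minimum:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 100_001)
values = 2 * np.sin(theta) ** 2 + 3 * np.cos(theta) ** 2
print(values.min())   # ~2.0, attained where cos(theta) = 0, i.e. theta = 90 degrees
```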
# expectation involving normal pdf and Rayleigh distribution
I need to calculate following definite integral
\begin{equation*} \frac{1}{2\pi }\int_0^\infty \frac{x^2 e^{-x^2/\sigma^2 } }{\sigma} \frac{e^{-\frac{\lambda}{{ax^2+b}}}}{\sqrt{ax^2+b}} ~~dx. \end{equation*}
It is actually finding the expected value of $x\,\phi\!\left(\lambda/\sqrt{ax^2+b}\right)$, where $\phi(\cdot)$ is the pdf of a standard normal distribution and $x$ is a random variable with a Rayleigh distribution with parameter $\sigma$.
• any solution will help, even in terms of numerical functions or even approximations – Alireza May 6 '15 at 20:52 |
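Since any numerical answer helps, here is a small sketch that evaluates the integral with scipy for example parameter values (the values of $\sigma$, $a$, $b$, $\lambda$ below are arbitrary placeholders):

```python
import numpy as np
from scipy.integrate import quad

def integrand(x, sigma, a, b, lam):
    return (x**2 * np.exp(-x**2 / sigma**2) / sigma
            * np.exp(-lam / (a * x**2 + b)) / np.sqrt(a * x**2 + b)) / (2 * np.pi)

sigma, a, b, lam = 1.0, 2.0, 3.0, 0.5          # placeholder parameters
value, err = quad(integrand, 0, np.inf, args=(sigma, a, b, lam))
print(value, err)                              # value of the definite integral and error estimate
```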
### Paper:
TR09-144 | 24th December 2009 17:46
#### An Invariance Principle for Polytopes
TR09-144
Authors: Prahladh Harsha, Adam Klivans, Raghu Meka
Publication: 24th December 2009 18:17
Abstract:
Let $X$ be randomly chosen from $\{-1,1\}^n$, and let $Y$ be randomly
chosen from the standard spherical Gaussian on $\R^n$. For any (possibly unbounded) polytope $P$
formed by the intersection of $k$ halfspaces, we prove that
$$\left|\Pr\left[X \in P\right] - \Pr\left[Y \in P\right]\right| \leq \log^{8/5}k \cdot \Delta,$$ where $\Delta$ is
a parameter that is small for polytopes formed by the intersection of "regular" halfspaces (i.e., halfspaces with low influence). The novelty of our invariance principle is the polylogarithmic dependence on $k$. Previously, only bounds that were at least linear in $k$ were known.
We give two important applications of our main result:
\begin{itemize}
\item A bound of $\log^{O(1)}k \cdot {\epsilon}^{1/6}$ on the Boolean noise
sensitivity of intersections of $k$ "regular" halfspaces (previous
work gave bounds linear in $k$). This gives a corresponding agnostic learning algorithm for intersections of regular halfspaces.
\item A pseudorandom generator (PRG) with seed length $O(\log n\,\poly(\log k,1/\delta))$ that $\delta$-fools {\em all} polytopes with $k$ faces with respect to the Gaussian distribution.
\end{itemize}
We also obtain PRGs with similar parameters that fool polytopes formed by intersection of regular halfspaces over the hypercube. Using our PRG constructions, we obtain the first deterministic quasi-polynomial time algorithms for approximately counting the number of solutions to a broad class of integer programs, including dense covering problems and contingency tables.
## Posts Tagged ‘Kepler’
### Let’s Read the Internet! Week 3
October 26, 2008
Self Control and the Prefrontal Cortex John Lehrer at The Frontal Cortex
Summarizes some research that indicates people only have a certain amount of willpower to ration out over the day. My first reaction to reading this article was to think, “yeah, but that’s only for weak people, not me.” My next reaction was to resist the temptation to check my email too frequently. My third reaction was to slaughter eight cats in a murderous frenzy, then to sit forlornly surveying the carnage I had wrought and wonder if this cycle would ever end.
Scott Belcastro’s Lonely Searching from Erratic Phenomena
I’ll admit I don’t know much about art, but I can tell when something looks cool. I saw how similar the paintings were, and felt surprised at first that people don’t get bored doing the same sort of thing over and over. But then I realized it must be because they’re refining, focusing down, and trying to work out subtleties and understand their subject more fully. Not that I see all the subtleties, exactly, but maybe if you read the text they actually talk about that stuff.
The Gallery of Fluid Motion
Videos of fluids being fluidy. Don’t get too excited, though. Despite what it sounds like, this is not a potty cam.
The Incredible Beauty of Hummingbirds in Flight RJ Evans at Webphemera
Small things can be pretty. They aren’t always pretty, which is sad news for your penis.
Is This The Oldest Eye On Earth? Tom Simonite on New Scientist
“It could be the oldest eye, or even human body part, still functioning or to have ever been in use for so long.” There’s a story for the grandkids.
The Laplace-Runge-Lenz Vector Blake Stacey at Science After the Sunclipse
A clever way to prove that orbits in a $r^{-2}$ potential are conic sections, without solving a complicated differential equation. I’m surprised we didn’t do this in ph1a, although I’m kind of glad we didn’t, because it makes me appreciate it much more now.
October 19, 2008
Here is the question I’m answering.
It’s a trick question! No possible height profile will perfectly reproduce Kepler orbits. The problem is that in the solar system, any given planet moves in two dimensions around the sun. But since the bowl is a curved surface, the balls wobble up and down through three dimensions, and you can’t match these different scenarios up perfectly.
The dynamics of a planet orbiting the sun come out of the Newtonian gravitational potential
$\Phi = -\frac{GM_{sun}m}{r}$
So you might think that if you just make the height of the bowl inverse proportional to the distance from the center, so that $h = -\frac{1}{r}$, the balls would follow Kepler orbits. After all, their potential energy would be the same as the potential energy of a planet orbiting the sun, right?
We need to look more closely at the variable $r$. For the case of a planet around the sun, $r$ is simply the distance from the planet to the sun. But for the balls circling the bowl, there are two possible interpretations of $r$. One interpretation is to take a string, lay it flat on the bowl, and measure the distance along the bowl to the center. That would be $r$. The problem with this approach is that the space is curved. If you were to measure the ratio of the circumference of a circle to its radius using $r$ defined this way, you would not get $2 \pi$; you would get something that depends on $r$. How could you then reproduce orbits through flat Newtonian space?
Instead of treating the surface of the bowl as a two-dimensional space, you might try to treat its projection as a two-dimensional space. So get up directly above the exhibit and look straight down at it with one eye closed. Then you’re looking at a flat space, so could you reproduce Kepler orbits there?
No, because the projection treats the radial and angular directions differently. If a ball has a true velocity of $1 \frac{m}{s}$ and is going around the center of the bowl in a circle, then in projection it still has an apparent velocity of $1 \frac{m}{s}$. On the other hand, if the same ball were plunging straight in towards the center, its velocity would appear slower by a factor of the slope of the bowl, because you wouldn’t notice the portion of its velocity that was up/down in real 3D space. The angle at which a ball appeared to be moving would be distorted by this effect.
If you designed the bowl so that the period of circular orbits followed Kepler’s third law, then in general the projections of balls wouldn’t follow conic sections any more. Projected angular momentum would not be conserved because real angular momentum is conserved, and the projection would hide different proportions of that at different times.
So, while the Kepler exhibit is cool to look at, as best I can tell you can’t truly make it mimic the orbits of planets around the sun.
### New Problem: The Kepler Exhibit
October 17, 2008
At the Exploratorium in San Francisco, you can play with this exhibit:
What should the height profile of the bowl be so that balls that roll without slipping (or, so that blocks sliding without friction) would reproduce 2-D Kepler orbits when viewed in projection from above? |
# Finding the settling time
1. Nov 15, 2008
### chagocal
1.
a)find time constant of the closed loop system.(take R(s)=1/s and D(s))
G(s)=k/(s+10),k=20
b) find the settling time(within 2% of the final value when R(s)=1/s and D(s)=1/s
solution for part 1a
H(s) = Y(s)/R(s) = G(s)/(1+G(s)) = (k/(s+10))/(1+k/(s+10)) = k/(s+10+k) = 20/(s+30)
so the time constant should equal 1/30 sec
stuck on 1b need help
2. Nov 16, 2008
### Redbelly98
Staff Emeritus
Welcome to PF.
It has been a long time since I worked with Laplace transforms, but does the signal behave as a straightforward exponential decay towards the final value? If so, simply solve for when the exponential is 2%, using the time constant you have.
3. Nov 18, 2008
### CEL
Find the dominant poles of your TF. They should be of the form $$-\sigma\pm j\omega$$. The settling time is $$\frac{4}{\sigma}$$ |
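A small numerical sketch of part (b), assuming unity feedback and ignoring the disturbance D(s), so that the closed loop is 20/(s+30):

```python
import numpy as np

k = 20.0
tau = 1.0 / (10.0 + k)                  # closed-loop pole at s = -(10 + k) = -30
t = np.linspace(0.0, 0.5, 50_001)
y_final = k / (10.0 + k)                # final value of the unit-step response
y = y_final * (1.0 - np.exp(-t / tau))  # step response of 20/(s+30)

settled = t[np.abs(y - y_final) <= 0.02 * y_final]
print(settled[0])                       # ~0.13 s, close to the 4*tau = 4/30 rule of thumb
```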
# 2 Introduction
Conclusions often echo introductions. This chapter was completed at the very end of the writing of the book. It outlines principles and ideas that are probably more relevant than the sum of technical details covered subsequently. When stuck with disappointing results, we advise the reader to take a step away from the algorithm and come back to this section to get a broader perspective of some of the issues in predictive modelling.
## 2.1 Context
The blossoming of machine learning in factor investing has its source at the confluence of three favorable developments: data availability, computational capacity, and economic groundings.
First, the data. Nowadays, classical providers, such as Bloomberg and Reuters have seen their playing field invaded by niche players and aggregation platforms.4 In addition, high-frequency data and derivative quotes have become mainstream. Hence, firm-specific attributes are easy and often cheap to compile. This means that the size of $$\mathbf{X}$$ in (2.1) is now sufficiently large to be plugged into ML algorithms. The order of magnitude (in 2019) that can be reached is the following: a few hundred monthly observations over several thousand stocks (US listed at least) covering a few hundred attributes. This makes a dataset of dozens of millions of points. While it is a reasonably high figure, we highlight that the chronological depth is probably the weak point and will remain so for decades to come because accounting figures are only released on a quarterly basis. Needless to say that this drawback does not hold for high-frequency strategies.
Second, computational power, both through hardware and software. Storage and processing speed are not technical hurdles anymore and models can even be run on the cloud thanks to services hosted by major actors (Amazon, Microsoft, IBM and Google) and by smaller players (Rackspace, Techila). On the software side, open source has become the norm, funded by corporations (TensorFlow & Keras by Google, Pytorch by Facebook, h2o, etc.), universities (Scikit-Learn by INRIA, NLPCore by Stanford, NLTK by UPenn) and small groups of researchers (caret, xgboost, tidymodels to list but a few frameworks). Consequently, ML is no longer the private turf of a handful of expert computer scientists, but is on the contrary accessible to anyone willing to learn and code.
Finally, economic framing. Machine learning applications in finance were initially introduced by computer scientists and information system experts, and were exploited shortly after by academics in financial economics and by hedge funds. Nonlinear relationships then became more mainstream in asset pricing. These contributions started to pave the way for the more brute-force approaches that have blossomed since the 2010 decade and which are mentioned throughout the book.
One synthetic proposal's first piece of advice is to rely on a model that makes sense economically. We agree with this stance, and the only assumption that we make in this book is that future returns depend on firm characteristics. The relationship between these features and performance is largely unknown and probably time-varying. This is why ML can be useful: to detect some hidden patterns beyond the documented asset pricing anomalies. Moreover, dynamic training makes it possible to adapt to changing market conditions.
## 2.2 Portfolio construction: the workflow
Building successful portfolio strategies requires many steps. This book covers many of them but focuses predominantly on the prediction part. Indeed, allocating to assets most of the time requires to make bets and thus to presage and foresee which ones will do well and which ones will not. In this book, we mostly resort to supervised learning to forecast returns in the cross-section. The baseline equation in supervised learning,
$$$\mathbf{y}=f(\mathbf{X})+\mathbf{\epsilon}, \tag{2.1}$$$
is translated in financial terms as
$$$\mathbf{r}_{t+1,n}=f(\mathbf{x}_{t,n})+\mathbf{\epsilon}_{t+1,n}, \tag{2.2}$$$ where $$f(\mathbf{x}_{t,n})$$ can be viewed as the expected return for time $$t+1$$ computed at time $$t$$, that is, $$\mathbb{E}_t[r_{t+1,n}]$$. Note that the model is common to all assets ($$f$$ is not indexed by $$n$$), thus it shares similarity with panel approaches.
Building accurate predictions requires to pay attention to all terms in the above equation. Chronologically, the first step is to gather data and to process it (see Chapter 4). To the best of our knowledge, the only consensus is that, on the $$\textbf{x}$$ side, the features should include classical predictors reported in the literature: market capitalization, accounting ratios, risk measures, momentum proxies (see Chapter 3). For the dependent variable, many researchers and practitioners work with monthly returns, but other maturities may perform better out-of-sample.
While it is tempting to believe that the most crucial part is the choice of $$f$$ (it is the most sophisticated, mathematically), we believe that the choice and engineering of inputs, that is, the variables, are at least as important. The usual modelling families for $$f$$ are covered in Chapters 5 to 9. Finally, the errors $$\mathbf{\epsilon}_{t+1,n}$$ are often overlooked. People consider that vanilla quadratic programming is the best way to go (the most common for sure!), thus the mainstream objective is to minimize squared errors. In fact, other options may be wiser choices (see for instance Section 7.4.3).
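To make equation (2.2) concrete, here is a minimal sketch (not from any particular study) that fits one panel model $f$ on synthetic firm characteristics and evaluates it out-of-sample; the dimensions and the linear model are placeholders:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
N, T, K = 500, 120, 10                             # stocks, months, characteristics
X = rng.normal(size=(T * N, K))                    # x_{t,n}: features known at time t
beta = rng.normal(scale=0.02, size=K)              # weak, unknown "true" signal
y = X @ beta + rng.normal(scale=0.10, size=T * N)  # r_{t+1,n}: next-period returns

split = (T - 12) * N                               # hold out the last 12 months
model = LinearRegression().fit(X[:split], y[:split])
ic = np.corrcoef(model.predict(X[split:]), y[split:])[0, 1]
print(f"out-of-sample information coefficient: {ic:.3f}")
```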
Even if the overall process, depicted in Figure 2.1, seems very sequential, it is more judicious to conceive it as integrated. All steps are intertwined and each part should not be dealt with independently from the others.5 The global framing of the problem is essential, from the choice of predictors, to the family of algorithms, not to mention the portfolio weighting schemes (see Chapter 12 for the latter).
## 2.3 Machine learning is no magic wand
By definition, the curse of predictions is that they rely on past data to infer patterns about subsequent fluctuations. The more or less explicit hope of any forecaster is that the past will turn out to be a good approximation of the future. Needless to say, this is a pious wish; in general, predictions fare badly. Surprisingly, this does not depend much on the sophistication of the econometric tool. In fact, heuristic guesses are often hard to beat.
To illustrate this sad truth, the baseline algorithms that we detail in Chapters 5 to 7 yield at best mediocre results. This is done on purpose. This forces the reader to understand that blindly feeding data and parameters to a coded function will seldom suffice to reach satisfactory out-of-sample accuracy.
Below, we sum up some key points that we have learned through our exploratory journey in financial ML.
• The first point is that causality is key. If one is able to identify $$X \rightarrow y$$, where $$y$$ are expected returns, then the problem is solved. Unfortunately, causality is incredibly hard to uncover.
• Thus, researchers have most of the time to make do with simple correlation patterns, which are far less informative and robust.
• Relatedly, financial datasets are extremely noisy. It is a daunting task to extract signals out of them. No-arbitrage reasonings imply that if a simple pattern yielded durable profits, it would mechanically and rapidly vanish.
• The no-free-lunch theorem imposes that the analyst formulates views on the model. This is why economic or econometric framing is key. The assumptions and choices that are made regarding both the dependent variables and the explanatory features are decisive. As a corollary, data is key. The inputs given to the models are probably much more important than the choice of the model itself.
• To maximize out-of-sample efficiency, the right question is probably to paraphrase Jeff Bezos: what’s not going to change? Persistent series are more likely to unveil enduring patterns.
• Everybody makes mistakes. Errors in loops or variable indexing are part of the journey. What matters is to learn from those lapses.
To conclude, we remind the reader of this obvious truth: nothing will ever replace practice. Gathering and cleaning data, coding backtests, tuning ML models, testing weighting schemes, debugging, starting all over again: these are all absolutely indispensable steps and tasks that must be repeated indefinitely. There is no substitute to experience. |
# Quotient map from $\mathbb R^2$ onto $\mathbb R^2\setminus \{(0,0)\}$
Could you help me to find a quotient mapping from $\mathbb R^2$ onto $\mathbb R^2\setminus \{(0,0)\}$?
Assume the standard topology on both spaces.
Thank you.
Hint: polar coordinates. – Qiaochu Yuan May 12 '12 at 19:01
Do you know any complex analysis? Especially -- hint! -- do you know about the complex exponential function? – Pete L. Clark May 12 '12 at 19:01 |
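Following the hints above, one candidate map (a sketch, not from the original thread) is the complex exponential:
$$q(x,y) = \bigl(e^{x}\cos y,\; e^{x}\sin y\bigr), \qquad \text{i.e. } q(z)=e^{z} \text{ under the identification } \mathbb{R}^{2}\cong\mathbb{C}.$$
It is continuous and maps onto $\mathbb{R}^{2}\setminus\{(0,0)\}$, and it is an open map (locally it is a homeomorphism), so it is a quotient map.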
# The UV rays in the stratosphere break CFCs into chlorine atoms. Chlorine reacts with ozone and breaks it into oxygen. As oxygen is helpful to man, why is this harmful?
The amount of oxygen produced in this way is small (compared to the roughly 21% of our atmosphere that is oxygen). The problem is we need that layer of ${O}_{3}$ to deal with the incoming UV light. It helps absorb some of that light and makes the sun's rays less harmful for us (and plants). Without the ozone layer, too much UV gets through, which can cause elevated cancer rates and reduced crop yields.
# Dirac delta
1. Jan 27, 2005
### RedX
$$\delta(\frac{p_f^2}{2m}-E_i^o-\hbar\omega)=\frac{m}{p_f} \delta(p_f-[2m(E_i^o+\hbar\omega)]^{\frac{1}{2}})$$
Shouldn't the right hand side be multiplued by 2?
2. Jan 27, 2005
### dextercioby
:surprised :surprised
Mr.QM+QFT guru,you f***ed up big time... :tongue:
:surprised
Apply the THEORY:
$$f(x):R\rightarrow R$$(1)
only with simple zeros (the algebraic multiplicities of the roots need to be 1).
Let's denote the solutions of the equation
$$f(x)=0$$(2)
by $(x_{\Delta})_{\Delta={1,...,N}}$ (3)
and let's assume that:
$$\frac{df(x)}{dx}|_{x=x_{\Delta}} \neq 0$$ (4)
Then in the theory of distributions there can be shown that:
$$\delta (f(x))=\sum_{\Delta =1}^{N} \frac{\delta (x-x_{\Delta})}{\left|\frac{df(x)}{dx}\right|_{x=x_{\Delta}}}$$ (5)
Daniel.
Last edited: Jan 27, 2005
3. Jan 27, 2005
### marlon
is there anyone else that can solve this problem in a clear and more mature manner. It's been a while since i worked with distributions in this way and it seems quite interesting to me. Can someone tell me what i did wrong ?
regards
marlon
4. Jan 27, 2005
### dextercioby
Okay:
$$f(x)\rightarrow f(p_{f})=\frac{p_{f}^{2}}{2m}-E_{0}^{i}-\hbar\omega$$ (6)
Solving the equation
$$f(p_{f})=0$$ (7)
,yields the 2 solutions (which fortunately have the degree of multiplicity exactly 1)
$$p_{f}^{1,2}=\pm \sqrt{2m(E_{0}^{i}+\hbar\omega)}$$ (8)
Computing the derivative of the function on the solutions (8) of the equation (7),we get,after considering the modulus/absolute value:
$$\frac{df(p_{f})}{dp_{f}}=\frac{\sqrt{2m(E_{0}^{i}+\hbar\omega)}}{m}$$ (9)
Combining (8),(9) and the general formula (5) (v.prior post),we get:
$$\delta (\frac{p_{f}^{2}}{2m}-E_{0}^{i}-\hbar\omega)=\frac{m}{\sqrt{2m(E_{0}^{i}+\hbar\omega)}} \{\delta[p_{f}-\sqrt{2m(E_{0}^{i}+\hbar\omega)}]+\delta[p_{f}+\sqrt{2m(E_{0}^{i}+\hbar\omega)}]\}$$ (10)
which is totally different than what the OP had posted...
IIRC,when learning QFT,i always said to myself:theorem of residues and the theory of distributions go hand in hand...
Daniel.
5. Jan 27, 2005
### marlon
Indeed, i just looked up the rule at hand. I get the same solution and i see where i went wrong in my first post. thanks for the polite correction.
marlon
i deleted my erroneous post
6. Jan 27, 2005
### da_willem
It's always nice to encounter the two of you in a post. All this harmony and warmth...
7. Jan 27, 2005
### marlon
yes, we really are the best of friends...
marlon |
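A quick numerical sanity check of formula (10) above, using a narrow Gaussian as a nascent delta function (the test function g and the parameter values are arbitrary):

```python
import numpy as np

m, E = 1.0, 2.0                     # here E stands for E_0^i + hbar*omega
p0 = np.sqrt(2 * m * E)
g = lambda p: np.exp(-0.2 * (p - 0.3) ** 2)          # arbitrary smooth test function

eps = 1e-3                                           # width of the nascent delta
delta = lambda y: np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

p = np.linspace(-10, 10, 2_000_001)
dp = p[1] - p[0]
lhs = np.sum(g(p) * delta(p**2 / (2 * m) - E)) * dp  # integral of g(p) * delta(p^2/2m - E)
rhs = (m / p0) * (g(p0) + g(-p0))                    # formula (10) applied to g
print(lhs, rhs)                                      # both roots contribute; the two numbers agree
```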
# Thread: Linear combinations - matrices
1. ## Linear combinations - matrices
I'm confused about a problem for my matrix algebra class. The directions say to "Determine if b is a linear combination of a1, a2, and a3."
The problem I am confused on is:
(These are all 3 x 1 matrices, I don't know how to write out a matrix on the computer)
[1]
[-2] = a1
[0]
[0]
[1] = a2
[2]
[5]
[-6] = a3
[8]
[2]
[-1] = b
[6]
The next thing I did was I used the equation (x1)(a1) + (x2)(a2) + (x3)(a3) = b, then came up with the system of equations:
x1 + 0 + 5(x3) = 2
-2(x1) + x2 - 6(x3) = -1
0 + 2(x2) + 8(x3) = 6
I put those in an augmented matrix:
(3x4 matrix)
[ 1 0 5 | 2]
[-2 1 -6 | -1]
[ 0 2 8 | 6]
I used elementary row operations and got the matrix down to:
[ 1 0 5 | 2 ]
[ 0 1 4 | 3 ]
[ 0 0 0 | 0 ]
I don't know where to go from here. I don't think that b is a linear combination, but my professor only gave one example on how to do this and it was a little different than the one on the homework.
2. Does the system, as you have reduced it, have at least one solution?
3. Originally Posted by Ackbeet
Does the system, as you have reduced it, have at least one solution?
I don't think it does... I tried using Row Operations to turn the 5 and the 4 into zeroes, but I couldn't come up with a way to do that, so, no, I don't think it has a solution. If there is no solution, it means that b is NOT a linear combination of a1, a2, and a3, right?
4. Well, you've done Gaussian elimination. Presumably, if you were going to find a solution, you'd do back substitution. What do you get when you try that?
5. Originally Posted by Ackbeet
Well, you've done Gaussian elimination. Presumably, if you were going to find a solution, you'd do back substitution. What do you get when you try that?
Well, the system of equations I would have from the reduced matrix would be
x1 + 5(x3) = 2
x2 + 4(x3) = 3
right? ... So,
x1 = 2 - 5(x3)
x2 = 3 - 4(x3)
Unless I'm missing something, I don't think you can solve it by using substitution, since there are two equations and three variables.
6. With any linear system, the number of solutions is either zero, one, or infinite. Because the system is underdetermined (more variables than equations), you can rule out the system having one solution. Do you think this system has zero solutions, or infinitely many solutions?
7. Originally Posted by Ackbeet
With any linear system, the number of solutions is either zero, one, or infinite. Because the system is underdetermined (more variables than equations), you can rule out the system having one solution. Do you think this system has zero solutions, or infinitely many solutions?
Infinitely many because the last equation in the system would be, based on the reduced matrix,
0(x1) + 0(x2) + 0(x3) = 0, and any number, when plugged in for x1, x2, and/or x3 would satisfy this equation.
8. Your conclusion is correct, but not because of the correct reason. The system has infinitely many solutions because in your reduction down to
x1 = 2 - 5(x3)
x2 = 3 - 4(x3),
you can have any value of x3, and x1 and x2 will be determined. The system would have no solutions if you had a system like this after row reduction:
$\displaystyle \left[\begin{matrix}1 & 0 & 3\\ 0 & 1 & 4\\ 0 & 0 & 0\end{matrix}\;\middle|\;\begin{matrix}3\\ 5\\ 2\end{matrix}\right]$
The last row there is a contradiction, because if you multiply every variable by zero, you're going to get zero, which is not equal to 2.
In any case, the important thing in your case is that there is a solution. Getting back to the original question, what does this tell you?
9. Originally Posted by Ackbeet
Your conclusion is correct, but not because of the correct reason. The system has infinitely many solutions because in your reduction down to
x1 = 2 - 5(x3)
x2 = 3 - 4(x3),
you can have any value of x3, and x1 and x2 will be determined. The system would have no solutions if you had a system like this after row reduction:
$\displaystyle \left[\begin{matrix}1 & 0 & 3\\ 0 & 1 & 4\\ 0 & 0 & 0\end{matrix}\;\middle|\;\begin{matrix}3\\ 5\\ 2\end{matrix}\right]$
The last row there is a contradiction, because if you multiply every variable by zero, you're going to get zero, which is not equal to 2.
In any case, the important thing in your case is that there is a solution. Getting back to the original question, what does this tell you?
Since there is a solution, then that means that b is a linear combination of the 3 matrices listed.
How would I write that out, though? In the example my professor gave us, we ended up with something like x1 = 3, x2 = 4, which implied that 3(a1) + 4(a2) = b, and so b was a linear combination. But since this has infinite solutions, would I just write "infinite solutions - b is a linear combination" ?
10. Well, I'd gather together all the info you have. You started with
(x1)(a1) + (x2)(a2) + (x3)(a3) = b.
You got it down to
x1 = 2 - 5(x3)
x2 = 3 - 4(x3).
I would pick a value for x3. Find x1 and x2, and then simply rewrite
(x1)(a1) + (x2)(a2) + (x3)(a3) = b.
Make sense?
11. Originally Posted by Ackbeet
Well, I'd gather together all the info you have. You started with
(x1)(a1) + (x2)(a2) + (x3)(a3) = b.
You got it down to
x1 = 2 - 5(x3)
x2 = 3 - 4(x3).
I would pick a value for x3. Find x1 and x2, and then simply rewrite
(x1)(a1) + (x2)(a2) + (x3)(a3) = b.
Make sense?
Yes, that makes sense. Thank you so much for walking me through it! It helped a lot
12. Good! You're welcome. Have a good one! |
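A quick computational check of the example in this thread (the row reduction and one resulting linear combination):

```python
import numpy as np
import sympy as sp

a1 = np.array([1, -2, 0])
a2 = np.array([0, 1, 2])
a3 = np.array([5, -6, 8])
b  = np.array([2, -1, 6])

# Row-reduce the augmented matrix [a1 a2 a3 | b]
M = sp.Matrix(np.column_stack([a1, a2, a3, b]).tolist())
print(M.rref())          # rows [1, 0, 5 | 2], [0, 1, 4 | 3], [0, 0, 0 | 0], pivots (0, 1)

# Picking x3 = 0 gives x1 = 2 and x2 = 3, and indeed 2*a1 + 3*a2 equals b:
print(2 * a1 + 3 * a2)   # [ 2 -1  6]
```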
# Balls Weight Calculator
## Metal Balls/Hemisphere Weight and Packing Volume,
12-06-2018 · Metal Balls and Metal Hemispheres Weight and Packing Volume Calculator. When you start a project that needs our metal balls for hanging, or where weight is a special factor, you want to figure out the metal balls' weight and choose the best thickness for your metal balls project. We publish a calculator for counting the weight of our metal balls.

Bal-tec - Ball Weight and Density: $\text{Weight} = \frac{4}{3} \cdot 3.1416 \cdot \left(\frac{3}{2}\right)^3 \cdot 0.409$, so $\text{Weight} = 5.782 \text{ pounds}$. Notice that only a one inch increase in diameter caused a 4 pound increase in weight. This three inch diameter ball is more than triple the weight of the two inch diameter ball.

## Metal Weight Calculator | Stainless Shapes, Inc.

Our all new metal calculator will find the accurate length and weight of the most popular metals requested from Stainless Shapes; if you have any questions, make sure to contact us immediately.

## Fasteners Weight Calculator | Bolts Weight Calculator

Covers weight calculation for bolts, stud bolts, hex bolts and foundation bolts, in metric and imperial units and in kg.

## Volume and Weight Calculator - CustomPart.Net

Calculate the volume and weight, in English or Metric units, for over 40 geometric shapes and a variety of materials. Select from such metals as Aluminum, Cast iron, or Steel, or from such thermoplastics as ABS, Nylon, or Polycarbonate.

## Metal Weight Calculator - Round, Square, Rectangle

Enter value, select units and click on calculate. Result will be displayed. Enter your values: Material, Shape, Quantity.

## Crosby Overhaul Weight Calculator

THE OVERHAUL WEIGHT CALCULATOR IS A MATHEMATICAL TOOL ONLY. ALL CALCULATIONS AND DECISIONS MUST BE THE RESPONSIBILITY OF A QUALIFIED PERSON. Calculations are for 6 x 19 IWRC. Units: US Standard Units (in, ft, lbs) or Metric Standard Units (mm, m, kg). Boom Length: ft.

## Lead Weight Calculator | Ultraray

STEP 1: Choose casting or extrusion shape: Rectangle, Sheet, Cube, Parallelogram, Trapezoid, Triangle, Pipe, Cylinder, Rod, Bar, […]

## Pizza Dough Calculator | Städler Made

The sizing of dough balls for Neapolitan and American pizza is pretty similar. Pizzas come in small, medium and large sizes. The calculator is set for medium pizzas. If you're planning to make your pizzas a different size, you can change this setting. The figure below shows the right dough ball weight for the different pizza sizes.

## Molecular Weight Calculator (Molar Mass) - Lenntech

This online calculator you can use for computing the average molecular weight (MW) of molecules by entering the chemical formulas (for example C3H4OH(COOH)3). Or you can choose from one of the next two option lists, which contain a series of common organic compounds (including their chemical formula) and all the elements.

## Pipe Weight Calculator - wCalcul

Stainless steel pipe weight calculator in mm; stainless tube.

## Ideal Weight Calculator

The Ideal Weight Calculator computes ideal body weight (IBW) ranges based on height, gender, and age. The idea of finding the IBW using a formula has been sought after by many experts for a long time. Currently, there persist several popular formulas, and our Ideal Weight Calculator provides their results for side-to-side comparisons.

## Calorie Calculator

This calorie calculator estimates the number of calories needed each day to maintain, lose, or gain weight. It provides results for the number of necessary calories based on a one or two-pound gain or loss per week. Learn more about different kinds of calories and their effects, and explore many other free calculators addressing the topics of finance, math, health, and fitness, among others.

## Metal Weight Calculator - Steel Weight, Sheet Metal, Pipe

The best and most advanced online metal weight calculation site in the field. On our site you can calculate the weight of various metal materials as close to real as possible. Calculation procedures are very easy and flexible.

## Stainless Plate Weight Calculator | Metal Weight

Online metal weight calculator which helps to calculate the weight of Stainless Plate metal. Material options include Alloy Steel, Aluminum, Beryllium, Brass, Bronze, Cast Iron, Columbium, Copper, Copper Alloys, Gold, Lead, Magnesium, Molybdenum, Nickel, Plastic, Silver, Stainless Steel, Tantalum, Titanium, Tungsten, Zinc and Zirconium.

## How to figure out the weight for Dough Balls of Different Sizes?

28-03-2018 · Re: How to figure out the weight for Dough Balls of Different Sizes? « Reply #16 on: March 28, 2018, 09:40:01 AM » GumbaWill, the dough calculator you mentioned comes up with 456 grams for a 0.08 TF 16 inch pie unless you entered in a Bowl Residue Compensation %.

## Glass Weight Calculator | Glass Weight Estimator

Glass weight estimator tool. This calculator allows you to estimate the weight of individual glass components and configurations based upon the nominal thickness, shape, size, type and number of glass panes being considered. Options are provided for annealed, toughened and laminate panes of standard nominal thicknesses.
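The sphere-weight formula quoted above is easy to script; a small sketch (density in lb/in³, with 0.409 being the value used in the Bal-tec example):

```python
import math

def ball_weight(diameter_in, density_lb_per_in3=0.409):
    """Weight of a solid sphere: (4/3) * pi * r^3 * density."""
    r = diameter_in / 2.0
    return (4.0 / 3.0) * math.pi * r**3 * density_lb_per_in3

print(round(ball_weight(3.0), 3))  # ~5.782 lb, the figure quoted above
print(round(ball_weight(2.0), 3))  # ~1.713 lb, so one extra inch of diameter adds about 4 lb
```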
Uniform Distribution definition, formula and applications
Uniform Distribution
There are various continuous probability distributions such as uniform distribution, normal distribution, exponential distribution, gamma distribution, beta distribution, weibul distribution, cauchy distribution ect. Uniform distribution is a univariate continuous probability distribution with two parameter a and b.
A continuous random variable $x$ is said to have a uniform distribution if the probability density function is defined by
$$f(x)=\begin{cases}\dfrac{1}{b-a}, & a\le x\le b,\\ 0, & \text{otherwise,}\end{cases}$$
where $a$ and $b$ are the two parameters of the distribution such that $-\infty < a < b < \infty$.
Properties
There are some impotant properties of uniform distribution-
• The mean of the uniform distribution is $(a+b)/2$.
• The median of the uniform distribution is $(a+b)/2$.
• The variance of the uniform distribution is $(b-a)^2/12$.
• The mode of the uniform distribution is any value in the interval $[a, b]$.
• The skewness of the uniform distribution is 0.
• The excess kurtosis of the uniform distribution is $-6/5$.
Special characteristics of Uniform distribution
Some special characteristics of uniform distribution are given below-
• The probability of this distribution is the same for equal intervals in any part of the distribution.
• The probability of the uniform distribution depends on the length of the interval, not on its position.
• The pdf of the uniform distribution over the interval [0,1] is defined by f(x)=1.
• Moreover, a uniform distribution can be defined in an infinite number of ways, one for each choice of the interval $[a,b]$.
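A quick simulation check of the mean and variance formulas above (the endpoints are arbitrary):

```python
import numpy as np

a, b = 2.0, 5.0
x = np.random.default_rng(0).uniform(a, b, size=1_000_000)

print(x.mean(), (a + b) / 2)        # sample mean      vs  (a+b)/2
print(x.var(), (b - a) ** 2 / 12)   # sample variance  vs  (b-a)^2/12
```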
### A Deep Multicolor Survey V: The M Dwarf Luminosity Function
Martini, P. and Osmer, P.S. 1998, AJ, 116, 2513
We present a study of M dwarfs discovered in a large area, multicolor survey. We employ a combination of morphological and color criteria to select M dwarfs to a limiting magnitude in $V$ of 22, the deepest such ground-based survey for M dwarfs to date. We solve for the vertical disk stellar density law and use the resulting parameters to derive the M dwarf luminosity and mass functions from this sample. We find the stellar luminosity function peaks at $M_V \sim 12$ and declines thereafter. Our derived mass function for stars with $M < 0.6\,M_{\rm sun}$ is inconsistent with a Salpeter function at the 3$\sigma$ level; instead, we find the mass function is relatively flat for $0.6\,M_{\rm sun} > M > 0.1\,M_{\rm sun}$.
© 2015-2019 Jacob Ström, Kalle Åström, and Tomas Akenine-Möller
# Chapter 2: Vectors
One of the most important and fundamental concepts in linear algebra is the vector. Luckily, vectors are all around us, but they are, in general, not visible. The common ways to introduce a vector is either to begin with the strict mathematical definition, or to discuss examples of vectors, such as velocities, forces, acceleration, etc. For a more intuitive and hopefully faster understanding of this important concept, this chapter instead begins with an interactive demonstration and a clear visualization of what a vector can be. In this case, a ball's velocity, which consists of a direction (where the ball is going) and a speed (how fast it is going there), is shown in Interactive Illustration 2.1.
Interactive Illustration 2.1: This little breakout game shows the concept of a vector. Play along for an interactive introduction. Control the paddle with left/right keys, or touch/swipe.
In this book, we denote points by capital italic letters, e.g., $A$, $B$, and $Q$. For most of the presentation in the early chapters, we will use two- and three-dimensional points, and some occasional one-dimensional points. We start with a definition of a vector.
Definition 2.1: Vector
Let $A$ and $B$ be two points. A directed line segment from $A$ to $B$ is denoted by:
$$\overrightarrow{AB}.$$ (2.1)
This directed line segment constitutes a vector. If you can move the line segment to another line segment with the same direction and length, they constitute the same vector.
[Interactive Illustration 2.2: two directed line segments, $\overrightarrow{AB}$ and $\overrightarrow{CD}=\vc{v}$, that constitute the same vector.]
For instance, the two line segments $\overrightarrow{AB}$ and $\overrightarrow{CD}$ in Interactive Illustration 2.2 constitute the same vector as can be seen when pushing the "forward" button.
We say that $\overrightarrow{AB}$ is a vector and that
$$\overrightarrow{AB} = \overrightarrow{CD}.$$ (2.2)
A shorter notation for vectors is to use a single boldface character, such as $\vc{v}$. As is shown in the illustration, $\vc{v} = \overrightarrow{AB} = \overrightarrow{CD}$. Some books make a difference between directed line segments and vectors, and reserve the short hand variant $\vc{v}$ for true vectors and the longer $\overrightarrow{AB}$ for directed line segments. While this may be mathematically more stringent, this difference is ignored for the purposes of this book, and we use vectors and directed line segments as one and the same thing.
We also use the terms tail point and tip point of a vector when this is convenient, where the tip point is where the arrowhead is, and the tail point is the other end.
A vector is completely defined by its
1. direction, and
2. its length
Note that a starting position of a vector is missing from the list above. As long as the direction and length is not changed, it is possible to move it around and have it start in any location. This is illustrated in Interactive Illustration 2.3.
Interactive Illustration 2.3: A vector does not have a specific starting position. This vector is drawn at a certain position, but even when it is moved to start somewhere else, it is still the same vector. Click/touch Forward to move the vector.
The length of a vector is denoted by $\ln{\overrightarrow{AB}}$, or in shorthand by $\ln{\vc{v}}$.
$$\text{length of vector:}\spc\spc \ln{\vc{v}}$$ (2.3)
The length of a vector is a scalar, which just means that it is a regular number, such as $7.5$. The term scalar is used to emphasize that it is just a number and not a vector or a point. Exactly how the length of a vector can be calculated will be deferred to Chapter 3.
Note that the order of the points is important, i.e., if you change the order of $A$ and $B$, another vector, $\overrightarrow{BA}$, is obtained. It has opposite direction, but the same length, i.e., $\ln{\overrightarrow{AB}} = \ln{\overrightarrow{BA}}$. Even $\overrightarrow{AA}$ is a vector, which is called the zero vector, as shown in the definition below.
Definition 2.2: Zero Vector
The zero vector is denoted by $\vc{0}$, and can be created using a directed line segment using the same point twice, i.e., $\vc{0}=\overrightarrow{AA}$. Note that $\ln{\vc{0}}=0$, i.e., the length of the zero vector is zero.
Two vectors, $\vc{u}$ and $\vc{v}$, are parallel if they have the same direction or opposite directions, but not necessarily the same lengths. This is shown to the right in Figure 2.4. Note how you can change the vectors in the figure, some can be changed by grabbing the tip, others by grabbing the tail. The notation
$$\vc{u}\, ||\, \vc{v}$$ (2.4)
means that $\vc{u}$ is parallel to $\vc{v}$. The zero vector $\vc{0}$ is said to be parallel to all other vectors. Next, we will present how two vectors can be added to form a new vector, and then follows scalar vector multiplication in Section 2.3.
There are two fundamental vector operations in linear algebra, namely, vector addition and scalar vector multiplication, where the latter is sometimes called vector scaling. Most of the mathematics in this book build upon these two operations, and even the most complex operations often lead back to addition and scaling. Vector scaling is described in Section 2.3, while vector addition is described here. Luckily, both vector addition and vector scaling behave as we would expect them to.
Definition 2.3: Vector Addition
The sum, $\vc{u}+\vc{v}$, of two vectors, $\vc{u}$ and $\vc{v}$, is constructed by placing $\vc{u}$ at some arbitrary location, and then placing $\vc{v}$ such that $\vc{v}$'s tail point coincides with $\vc{u}$'s tip point. The vector $\vc{u}+\vc{v}$ then starts at $\vc{u}$'s tail point and ends at $\vc{v}$'s tip point.
Exactly how the vector sum is constructed is shown in Interactive Illustration 2.5 below.
Interactive Illustration 2.5: Two vectors, $\vc{u}$ and $\vc{v}$, are shown. These will be added to form the vector sum $\vc{u} + \vc{v}$. Note that the vectors can be changed as usual by dragging their tips. Click/press Forward to continue to the next stage of the illustration.
So far, we have only illustrated the vector addition in the plane, i.e., in two dimensions. However, it can also be illustrated in three dimensions. This is done below in Interactive Illustration 2.6. Remember that you can rotate the figure by moving the mouse while right clicking or by using a two-finger swipe.
Interactive Illustration 2.6: Two vectors, $\vc{u}$ and $\vc{v}$, are shown. These will be added to form the vector sum, $\vc{u} + \vc{v}$. Note that the vectors can be changed as usual by dragging their tip points. If you do so, you will move the points in the plane of the screen. Click/press Forward to continue to the next stage of the illustration.
Interactive Illustration 2.6: In this final stage, we have added some dashed support lines to make it easier to see the spatial relationships. Recall that you can press the right mouse button, keep it pressed, and move the mouse to see the vector addition from another view point. For tablets, the same maneuver is done by swiping with two fingers. Note that by changing the point of view like this, you can verify that $\hid{\vc{u}}$, $\hid{\vc{v}}$, and $\hid{\vc{u}+\vc{v}}$ all lie in the same plane. Try also to move the vectors so that the projected points no longer end up on a straight line.
As we saw in the Breakout Game 2.1, the speed of the ball was increased by 50% after a while. This is an example of vector scaling, where the velocity vector simply was scaled by a factor of $1.5$. However, a scaling factor can be negative as well. This is all summarized in the definition below, where, instead of the term vector scaling, we use the term scalar vector multiplication.
Definition 2.4: Scalar Vector Multiplication
When a vector, $\vc{v}$, is multiplied by a scalar, $k$, the vector $k\vc{v}$ is obtained, which is parallel to $\vc{v}$ and whose length is $\abs{k}\,\ln{\vc{v}}$. The direction of $k\vc{v}$ is opposite that of $\vc{v}$ if $k$ is negative, and otherwise it has the same direction as $\vc{v}$. If $k=0$, then $k\vc{v}=\vc{0}$.
A corollary to this is that if the two vectors $\vc{u}$ and $\vc{v}$ satisfy $\vc{u} = k \vc{v}$ for some scalar $k$, then $\vc{u}$ and $\vc{v}$ are parallel.
Scalar vector multiplication is shown in Interactive Illustration 2.7 below. The reader is encouraged to play around with the illustration.
Interactive Illustration 2.7: Here, we show how a vector, $\vc{v}$, can be multiplied by a scalar, $k$, so that $k\vc{v}$ is generated. The reader can move the vector, $\vc{v}$, and also manipulate the value of $k$ by dragging the slider below the illustration. Note what happens to $k\vc{v}$ when $k$ is negative. As an exercise, try to make the tip point of $\vc{v}$ coincide with the tip point of $k\vc{v}$.
Now that we can both add vectors, and scale vectors by a real number, it is rather straightforward to subtract two vectors as well. This is shown in the following example.
Example 2.1: Vector Subtraction
Note that by using vector addition (Definition 2.3) and scalar vector multiplication (Definition 2.4) by $-1$, we can subtract one vector, $\vc{v}$, from another, $\vc{u}$ according to
$$\underbrace{\vc{u} + (\underbrace{-1\vc{v}}_{\text{scaling}})}_{\text{addition}} = \vc{u}-\vc{v},$$ (2.5)
where we have introduced the shorthand notation, $\vc{u}-\vc{v}$, for the expression to the left of the equal sign. Vector subtraction is illustrated below.
Interactive Illustration 2.8: Vector subtraction, $\vc{u}-\vc{v}$, is illustrated here. First, only the two vectors, $\vc{u}$ and $\vc{v}$, are shown.
Interactive Illustration 2.8: Finally, we see that $\hid{\vc{u}-\vc{v}}$ is the vector from $\hid{\vc{v}}$'s tip point to $\hid{\vc{u}}$'s tip point. The reader can move the red ($\hid{\vc{u}}$) and green ($\hid{\vc{v}}$) vectors, and as an exercise, try out what happens if one of $\hid{\vc{u}}$ or $\hid{\vc{v}}$ is set to the zero vector, and also try setting $\hid{\vc{u}=\vc{v}}$.
Example 2.2: Box
In this example, we will see how a box can be created by using three vectors that all make a right angle with each other.
Interactive Illustration 2.9: In this example, we have one red, one green, and one blue vector. These all make right angles with each other, and they are constrained to be like that. The length of the vectors can be changed interactively, though. In the following, we will show how a box can be built from these vectors.
There are a number of different rules for using both vector addition and scalar vector multiplication. This is the topic of the next section.
Using vectors in calculations with vector addition and scalar vector multiplication is fairly straightforward. They behave as we might expect them to. However, rules such as $\vc{u}+(\vc{v}+\vc{w})=(\vc{u}+\vc{v})+\vc{w}$ must be proved. The rules for vector arithmetic are summarized in Theorem 2.1.
Theorem 2.1: Properties of Vector Arithmetic
Assuming that $\vc{u}$, $\vc{v}$, and $\vc{w}$ are vectors of the same size, and that $k$ and $l$ are scalars, then the following rules hold:
\begin{gather} \begin{array}{llr} (i) & \vc{u}+\vc{v} = \vc{v}+\vc{u} & \spc\text{(commutativity)} \\ (ii) & (\vc{u}+\vc{v})+\vc{w} = \vc{u}+(\vc{v}+\vc{w}) & \spc\text{(associativity)} \\ (iii) & \vc{v}+\vc{0} = \vc{v} & \spc\text{(zero existence)} \\ (iv) & \vc{v}+ (-\vc{v}) = \vc{0} & \spc\text{(negative vector existence)} \\ (v) & k(l\vc{v}) = (kl)\vc{v} & \spc\text{(associativity)}\\ (vi) & 1\vc{v} = \vc{v} & \spc\text{(multiplicative one)} \\ (vii) & 0\vc{v} = \vc{0} & \spc\text{(multiplicative zero)} \\ (viii) & k\vc{0} = \vc{0} & \spc\text{(multiplicative zero vector)} \\ (ix) & k(\vc{u}+\vc{v}) = k\vc{u}+k\vc{v} & \spc\text{(distributivity 1)} \\ (x) & (k+l)\vc{v} = k\vc{v}+l\vc{v} & \spc\text{(distributivity 2)} \\ \end{array} \end{gather} (2.6)
While most (or all) of the rules above feel very natural and intuitive, they must be proved nevertheless. The reader is encouraged to look at the proofs, and especially at the interactive illustrations, which can increase the feeling and intuition for many of the rules.
$(i)$ This rule (commutativity) has already been proved in the figure in Definition 2.3. Another way to prove this rule is shown below in Interactive Illustration 2.10.
Interactive Illustration 2.10: This interactive illustration shows commutativity of vector addition. This means that $\vc{u}+\vc{v}=\vc{v}+\vc{u}$. Click/touch Forward to continue.
Interactive Illustration 2.10: Finally, we also show the other translated vectors both to the left and right. As can be seen, the resulting vector sum is the same, regardless of the order of the operands. Recall that the vectors can be moved around.
$(ii)$ The proof of this rule (associativity) is shown in Interactive Illustration 2.11.
Interactive Illustration 2.11: Consider the three vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$. Since vectors do not have a specific starting point, we have arranged them so that $\vc{v}$ starts where $\vc{u}$ ends, and $\vc{w}$ starts where $\vc{v}$ ends.
$(iii)$ Since the zero vector has length zero, the definition of vector addition gives us that $\vc{v}+\vc{0}$ is the same as $\vc{v}$.
$(iv)$ Since $-\vc{v}$ is exactly $\vc{v}$ with opposite direction, the sum will be zero.
$(v)$ The approach to this proof is to start with the left hand side of the equal sign, and find out what the direction and length is. Then the same is done for the right hand side of the equal sign. The details are left as an exercise for the reader.
$(vi)$ Since $1$ is a positive number, we know that $1\vc{v}$ and $\vc{v}$ have the same direction, so it only remains to check that they have the same length. The length of the left hand side of the equal sign is $\abs{1}\,\ln{\vc{v}}=\ln{\vc{v}}$, and for the right hand side it is $\ln{\vc{v}}$, i.e., they are the same, which proves the rule.
$(vii)$ and $(viii)$ First, note the difference between these. In $(vii)$, we have the scalar zero times $\vc{v}$, which equals the zero vector, and in $(viii)$, we have a scalar, $k$, times the zero vector, which equals the zero vector. $(vii)$ is actually part of Definition 2.4, so only $(viii)$ needs to be proved. The lengths of both $k\vc{0}$ and $\vc{0}$ are zero, which proves the rule.
$(ix)$ First, we refer the reader to Interactive Illustration 2.12. Be sure to press Forward until the last stage of the illustration. The formal proof (of distributivity) follows after the illustration.
Interactive Illustration 2.12: This illustration helps show the rule $k(\vc{u}+\vc{v}) = k\vc{u}+k\vc{v}$. First, we simply show two vectors, $\vc{u}$ and $\vc{v}$, and their sum, $\vc{u}+\vc{v}$. Press Forward to continue.
Interactive Illustration 2.12: Finally, the vector $\hid{k(\vc{u}+\vc{v})}$ is shown as well. Note that the smaller triangle $\hid{\triangle O A_1 B_1}$ is similar to the larger triangle $\hid{\triangle O A_2 B_2}$, since the angles at $\hid{A_1}$ and $\hid{A_2}$ are equal and since the two edges $\hid{\vc{v}}$ and $\hid{\vc{u}}$ are proportional to $\hid{k\vc{v}}$ and $\hid{k\vc{u}}$. Thus it is clear that by adding $\hid{k\vc{u}}$ and $\hid{k\vc{v}}$, we reach $\hid{k\vc{u}+k\vc{v}}$, which is the same as $\hid{k(\vc{u}+\vc{v})}$. Recall that you can press the right mouse button, keep it pressed, and move the mouse to see the vector addition from another perspective. For tablets, the same maneuver is done by swiping with two fingers.
It follows from scalar vector multiplication (Definition 2.4) that
\begin{align} \ln{k\vc{u}} &= \abs{k}\,\ln{\vc{u}}, \\ \ln{k\vc{v}} &= \abs{k}\,\ln{\vc{v}}, \end{align} (2.7)
and if $k>0$ then $\vc{u}$ and $k\vc{u}$ have the same direction, and so do $\vc{v}$ and $k\vc{v}$. On the other hand, if $k<0$ then $\vc{u}$ and $k\vc{u}$ have opposite directions, and so do $\vc{v}$ and $k\vc{v}$. This implies that the triangle, formed by the following set of three points: $\{O$, $O+\vc{u}$, $O+\vc{u}+\vc{v}\}$, is similar to the triangle formed by $\{O$, $O+k\vc{u}$, $O+k\vc{u}+k\vc{v}\}$. That those two triangles are similar also means that $O$, $O+\vc{u}+\vc{v}$ and $O+k\vc{u}+k\vc{v}$ lie on a straight line. Furthermore, since the triangles are similar, and due to (2.7), we know that
$$\ln{k(\vc{u}+\vc{v})} = \abs{k}\,\ln{\vc{u}+\vc{v}}.$$ (2.8)
If $k>0$ then $k(\vc{u}+\vc{v})$ has the same direction as $\vc{u}+\vc{v}$, and if $k<0$ then they have opposite directions. Hence, it follows that $k(\vc{u}+\vc{v}) = k\vc{u}+k\vc{v}$. The rule is trivially true if $k=0$, which concludes the proof of this rule.
$(x)$ This is somewhat similar to $(ix)$, but simpler, and so is left for the reader.
This concludes the proofs for Theorem 2.1.
$\square$
Example 2.3: Vector Addition of Three Vectors
To get an understanding of how vector addition works for more than two vectors, Interactive Illustration 2.13 below shows the addition of three vectors. Recall that vector addition is associative, so we may write $\vc{u}+\vc{v}+\vc{w}$ without any parentheses.
Interactive Illustration 2.13: This interactive illustration shows the addition of three vectors, shown to the left. The vectors can be moved around as usual, and the interactive illustration may be advanced by clicking/touching Forward.
Interactive Illustration 2.13: Finally, the black vector is shown, which is the sum of the three vectors. Recall that the vectors to the left can be moved around by clicking close to the tip of the vectors and moving the mouse while pressing. As an exercise, try to make the three vectors sum to zero so a triangle appears to the right.
It is often useful to be able to calculate the middle point of two points. This is described in the following theorem.
Theorem 2.2: The Middle Point Formula
Assume that $M$ is the middle point of the line segment that goes between $A$ and $B$ as shown in the illustration to the right. Assume $O$ is another point. The vector $\overrightarrow{OM}$, i.e., from $O$ to $M$, can be written as
$$\overrightarrow{OM} = \frac{1}{2}(\overrightarrow{OA} + \overrightarrow{OB}).$$ (2.9)
The vector $\overrightarrow{OM}$ is the sum of $\overrightarrow{OA}$ and $\overrightarrow{AM}$
$$\overrightarrow{OM} = \overrightarrow{OA} + \overrightarrow{AM}.$$ (2.10)
Another way of saying this is that if you start in $O$ and want to end up in $M$, you can either go first from $O$ to $A$ and then from $A$ to $M$ (right hand side of the equation) or go directly from $O$ to $M$ (left hand side of equation).
By going via $B$ instead we get
$$\overrightarrow{OM} = \overrightarrow{OB} + \overrightarrow{BM}.$$ (2.11)
Summing these two equations together gives
$$2\overrightarrow{OM} = \overrightarrow{OA} + \overrightarrow{OB} + \overrightarrow{AM} + \overrightarrow{BM}.$$ (2.12)
Since $\overrightarrow{BM}$ has the same length as $\overrightarrow{AM}$ but opposite direction, it must hold that $\overrightarrow{BM} = -\overrightarrow{AM}$. Inserting this in the equation above and dividing by two gives
$$\overrightarrow{OM} = \frac{1}{2}(\overrightarrow{OA} + \overrightarrow{OB}).$$ (2.13)
Sometimes you also see the shorter notation
$$M = \frac{1}{2}(A + B)$$ (2.14)
$\square$
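Although coordinates are only introduced later in this chapter, the middle point formula is easy to check numerically once points are given as coordinate pairs. The following Python sketch uses made-up example points and is only an illustration, not part of the original text.

```python
# Numerical check of the middle point formula (Theorem 2.2).
A = (1.0, 4.0)   # example points, chosen arbitrarily
B = (5.0, 2.0)

# M = (A + B) / 2, computed component-wise.
M = tuple(0.5 * (a + b) for a, b in zip(A, B))
print(M)  # (3.0, 3.0)

# The vector from A to M equals the vector from M to B, so M is the middle point.
AM = tuple(m - a for a, m in zip(A, M))
MB = tuple(b - m for m, b in zip(M, B))
assert AM == MB
```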
Example 2.4: Sierpinski Triangles using the Middle Point Theorem
We will now show how the middle point formula can be used to generate a geometrical figure called the Sierpinski triangle. Assume we have a triangle consisting of three points, $A$, $B$, and $C$. Using Theorem 2.2, the midpoints of each edge can now be computed. These midpoints can be connected to form four new triangles, where the center triangle is empty. If this process is repeated for each new non-empty triangle, then we arrive at the Sierpinski triangle. This is shown in Interactive Illustration 2.15 below.
Interactive Illustration 2.15: In this illustration, we will show how a geometrical figure, called the Sierpinski triangle, is constructed. We start with three (moveable) points, $A$, $B$, and $C$, connected to form a triangle.
(Figure labels: $A$, $B$, $C$, and the edge midpoints $M_1 = \frac{1}{2}(A + B)$, $M_2 = \frac{1}{2}(B + C)$, $M_3 = \frac{1}{2}(A + C)$.)
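The construction described in this example translates directly into code. Below is one possible Python sketch (not taken from the book): points are represented as coordinate pairs, and each triangle is recursively subdivided using the middle point formula.

```python
def midpoint(P, Q):
    """The middle point formula, M = (P + Q) / 2, applied component-wise."""
    return tuple(0.5 * (p + q) for p, q in zip(P, Q))

def sierpinski(A, B, C, depth):
    """Return the list of filled triangles after `depth` subdivisions."""
    if depth == 0:
        return [(A, B, C)]
    M1, M2, M3 = midpoint(A, B), midpoint(B, C), midpoint(A, C)
    # Recurse into the three corner triangles; the middle triangle is left empty.
    return (sierpinski(A, M1, M3, depth - 1) +
            sierpinski(M1, B, M2, depth - 1) +
            sierpinski(M3, M2, C, depth - 1))

triangles = sierpinski((0.0, 0.0), (1.0, 0.0), (0.5, 0.866), depth=4)
print(len(triangles))   # 3**4 = 81 small filled triangles
```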
Example 2.5: Center of Mass Formula
In the triangle $ABC$, the point $A'$ is the midpoint between $B$ and $C$. The line segment from $A$ to $A'$ is called the median of $A$. Let $M$ be the point that divides the median of $A$ in proportion 2 to 1, as shown in the illustration to the right.
The center of mass formula states that
$$\pvec{OM} = \frac{1}{3}(\pvec{OA} + \pvec{OB} + \pvec{OC}).$$ (2.15)
This formula can be proved as follows. We can go from $O$ to $M$ either directly or via $A$, hence
$$\pvec{OM} = \pvec{OA} + \pvec{AM}.$$ (2.16)
It is also possible to arrive at $M$ via $A'$. This gives
$$\pvec{OM} = \pvec{OA'} + \pvec{A'M}.$$ (2.17)
One of the assumptions is that $\pvec{A'M}$ is half the length of $\pvec{AM}$ and of opposite direction, and therefore, it holds that $\pvec{A'M} = -\frac{1}{2}\pvec{AM}$. Inserting this in the equation above gives
$$\pvec{OM} = \pvec{OA'} -\frac{1}{2}\pvec{AM}.$$ (2.18)
We can eliminate $\pvec{AM}$ by adding Equation (2.16) to two times Equation (2.18)
$$3\,\pvec{OM} = \pvec{OA} + 2\pvec{OA'}.$$ (2.19)
Since $A'$ is the midpoint of $B$ and $C$, we know from the middle point formula that $\pvec{OA'} = \frac{1}{2}(\pvec{OB} + \pvec{OC})$. Inserting this in the equation above gives
$$3\,\pvec{OM} = \pvec{OA} + 2\cdot\frac{1}{2}(\pvec{OB} + \pvec{OC}),$$ (2.20)
which simplifies to
$$\pvec{OM} = \frac{1}{3}(\pvec{OA} + \pvec{OB} + \pvec{OC}).$$ (2.21)
This completes the proof.
Note that since the formula is symmetric, it works just as well on the median to $B$. The same point, $M$, will divide the median going from $B$ to $B'$ in proportion $2:1$. This can be seen by pressing forward in the interactive illustration.
This point is also called the center of mass. If the triangle were cut out of cardboard, this is the point where it would balance on the tip of a pencil, which explains the name "center of mass". Also, if equal point masses were placed at the points $A$, $B$, and $C$, then $M$ is the point where they would balance out.
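The center of mass formula is also easy to verify numerically. The Python sketch below, with made-up coordinates, checks both that $M$ is the average of the three corners and that $M$ divides the median from $A$ in proportion $2:1$ (an illustration only, not part of the original text).

```python
import math

A, B, C = (0.0, 0.0), (6.0, 0.0), (3.0, 6.0)   # an example triangle

# Center of mass, M = (A + B + C) / 3, computed component-wise.
M = tuple((a + b + c) / 3.0 for a, b, c in zip(A, B, C))

# A' is the midpoint of B and C.
Ap = tuple(0.5 * (b + c) for b, c in zip(B, C))

# |AM| should be twice |MA'|, i.e., M divides the median in proportion 2:1.
print(M, math.dist(A, M) / math.dist(M, Ap))   # (3.0, 2.0) 2.0
```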
Most of you are probably familiar with the concept of a coordinate system, such as in the map in the first step of Interactive Illustration 2.17 below. In this first step, the axes are perpendicular and of equal length, but this is a special case, as can be seen by pressing Forward. This section will describe the general coordinate systems, and the interaction between vectors, bases, and coordinates.
Interactive Illustration 2.17: A map with an ordinary coordinate system. The center of the map is marked as the origin, and we show the $x$-axis as a horizontal arrow, and the $y$-axis as a vertical arrow. These axes are locally similar to the longitude and latitude, but not on a global scale since the earth is not flat. Press/click Forward to continue to the next stage.
Next, we define how coordinates can be described in one, two, and three dimensions. This is done with the following set of theorems.
Theorem 2.3: Coordinate in One Dimension
Let $\vc{e}$ be a non-zero vector on a straight line. For each vector, $\vc{v}$, on the line, there is only one number, $x$, such that
$$\vc{v} = x \vc{e}.$$ (2.22)
(The vector, $\vc{v}$, in the figure to the right can be moved around.)
If $\vc{e}$ and $\vc{v}$ have the same direction, then choose $x=\ln{\vc{v}}/\ln{\vc{e}}$, and if $\vc{e}$ and $\vc{v}$ have opposite directions, then set $x=-\ln{\vc{v}}/\ln{\vc{e}}$. Finally, if $\vc{v}=\vc{0}$, then $x=0$. It follows from the definition of scalar vector multiplication, $x\vc{e}$, that $x$ is the only number that fulfils $\vc{v} = x\vc{e}$.
$\square$
Note that we say that $\vc{e}$ is a basis vector, and that $x$ is the coordinate for $\vc{v}$ in the basis of $\{\vc{e}\}$.
So far, this is not very exciting, but the next step makes this much more useful.
Theorem 2.4: Coordinates in Two Dimensions
Let $\vc{e}_1$ and $\vc{e}_2$ be two non-parallel vectors (which both lie in a plane). For every vector, $\vc{v}$, in this plane, there is a single coordinate pair, $(x,y)$, such that
$$\vc{v} = x\vc{e}_1 + y\vc{e}_2.$$ (2.23)
(The vectors $\vc{v}$, $\vc{e}_1$, and $\vc{e}_2$ can be moved around in the figure.)
For this proof, we will use Interactive Illustration 2.19. As can be seen, $P_1$ was obtained by drawing a line, parallel to $\vc{e}_2$, from the tip point of $\vc{v}$ until it collides with the line going through $\vc{e}_1$. Similarly, $P_2$ is obtained by drawing a line, parallel to $\vc{e}_1$, from the tip point of $\vc{v}$ until it collides with the line going through $\vc{e}_2$. It is clear that
$$\vc{v} = \overrightarrow{O P_1} + \overrightarrow{O P_2}.$$ (2.24)
Now, let us introduce $\vc{u} = \overrightarrow{O P_1}$ and $\vc{w} = \overrightarrow{O P_2}$. Using Theorem 2.3 on $\vc{u}$ with $\vc{e}_1$ as basis vector, we get $\vc{u} = x \vc{e}_1$. Similarly, for $\vc{w}$ with $\vc{e}_2$ as basis vector, $\vc{w} = y \vc{e}_2$ is obtained. Hence, the vector $\vc{v}$ can be expressed as
$$\vc{v} = \vc{u} + \vc{w} = x \vc{e}_1 + y \vc{e}_2.$$ (2.25)
It remains to prove that $x$ and $y$ are unique in the representation of $\vc{v}$. If the representation would not be unique, then another coordinate pair, $(x',y')$, would exist such that
$$\vc{v} = x' \vc{e}_1 + y' \vc{e}_2.$$ (2.26)
Combining (2.25) and (2.26), we get
\begin{gather} x \vc{e}_1 + y \vc{e}_2= x' \vc{e}_1 + y' \vc{e}_2 \\ \Longleftrightarrow \\ (x-x') \vc{e}_1 = (y'-y) \vc{e}_2. \end{gather} (2.27)
The conclusion from this is that if another representation, $(x',y')$, would exist, then $\vc{e}_1$ and $\vc{e}_2$ would be parallel (bottom row in (2.27)). For instance, if $x'$ is different from $x$, then $(x-x') \neq 0$ and both sides can be divided by $(x-x')$, which gives us
\begin{gather} \vc{e}_1 = \frac{(y'-y)}{(x-x')} \vc{e}_2, \end{gather} (2.28)
which can be expressed as $\vc{e}_1 = k \vc{e}_2$ with $k = \frac{(y'-y)}{(x-x')}$. However, according to the corollary to Definition 2.4 this means that $\vc{e}_1$ and $\vc{e}_2$ are parallel, contradicting the assumption in Theorem 2.4. The same reasoning applies if $y' - y \neq 0$. Hence, we have shown that there is only one unique pair, $(x,y)$, for each vector, $\vc{v}$, by using a proof by contradiction.
$\square$
Note that we say that $\vc{e}_1$ and $\vc{e}_2$ are basis vectors, and that $x$ and $y$ are the coordinates for $\vc{v}$ in the basis of $\{\vc{e}_1,\vc{e}_2\}$.
Next, we will extend this to three dimensions as well.
Theorem 2.5: Coordinates in Three Dimensions
Let $\vc{e}_1$, $\vc{e}_2$, and $\vc{e}_3$ be three non-zero vectors such that there is no plane that is parallel to all three of them. For every vector, $\vc{v}$, in three-dimensional space, there is a single coordinate triplet, $(x,y,z)$, such that
$$\vc{v} = x\vc{e}_1 + y\vc{e}_2 + z\vc{e}_3.$$ (2.29)
Start by placing all the vectors $\vc{v}$, $\vc{e}_1$, $\vc{e}_2$ and $\vc{e}_3$ so that they start in the origin according to Interactive Illustration 2.20. Let $\pi_{12}$ be the plane through $O$ that contains $\vc{e}_1$ and $\vc{e}_2$, and let $P$ be the point at the tip of $\vc{v}$, i.e., $\vc{v} = \overrightarrow{OP}$.
Interactive Illustration 2.20: Starting with the three vectors $\vc{e}_1$, $\vc{e}_2$ and $\vc{e}_3$, all placed with their tails in a point $O$.
Interactive Illustration 2.20: In summary, going from $\hid{O}$ to $\hid{P}$ can be done by first going to $\hid{P_{12}}$: $\hid{\overrightarrow{OP} = \overrightarrow{OP_{12}} + \overrightarrow{P_{12}P}}$. These two terms can in turn be exchanged using $\hid{\overrightarrow{OP_{12}} = x\vc{e}_1 + y \vc{e}_2}$ and $\hid{\overrightarrow{P_{12}P} = z\vc{e}_3}$. Thus $\hid{\overrightarrow{OP} = \overrightarrow{OP_{12}} + \overrightarrow{P_{12}P} = x\vc{e}_1 + y \vc{e}_2 + z\vc{e}_3}$.
Draw a line from $P$ parallel with $\vc{e}_3$ that intersects the plane $\pi_{12}$ in the point $P_{12}$. It is now clear that we can write $\vc{v}$ as the sum
$$\vc{v} = \overrightarrow{OP} = \overrightarrow{OP_{12}} + \overrightarrow{P_{12}P}.$$ (2.30)
However, according to Theorem 2.4 (two dimensions), $\overrightarrow{OP_{12}}$ can be written as $\overrightarrow{OP_{12}} = x \vc{e}_1 + y \vc{e}_2$, and according to Theorem 2.3 (one dimension), $\overrightarrow{P_{12}P}$ can be written as $\overrightarrow{P_{12}P} = z \vc{e}_3$. Hence, there exist three numbers $x$, $y$, and $z$, such that
$$\vc{v} = x \vc{e}_1 + y \vc{e}_2 + z \vc{e}_3.$$ (2.31)
We must now prove that $x$, $y$ and $z$ are the only numbers for which this is possible. Assume that there is another set of numbers, $x'$, $y'$ $z'$, that also generates the same vector, $\vc{v}$, that is
$$\vc{v} = x' \vc{e}_1 + y' \vc{e}_2 + z' \vc{e}_3.$$ (2.32)
Combining (2.31) and (2.32) gives
$$x \vc{e}_1 + y \vc{e}_2 + z \vc{e}_3 = x' \vc{e}_1 + y' \vc{e}_2 + z' \vc{e}_3.$$ (2.33)
This can be rearranged to
$$(x-x') \vc{e}_1 + (y-y') \vc{e}_2 + (z-z') \vc{e}_3 = 0.$$ (2.34)
If the new set ($x'$, $y'$, $z'$) is to be different from the other ($x$, $y$, $z$), at least one of the terms must now be different from zero. Assume it is $(x-x')$ (or else, rename the vectors and scalars so that it becomes this term). This means that we can divide by $(x-x')$ to obtain
$$\vc{e}_1 = - \frac{(y-y')}{(x-x')} \vc{e}_2 - \frac{(z-z')}{(x-x')}\vc{e}_3,$$ (2.35)
which also can be expressed as
$$\vc{e}_1 = \alpha \vc{e}_2 + \beta \vc{e}_3,$$ (2.36)
where $\alpha = - \frac{(y-y')}{(x-x')}$ and $\beta = - \frac{(z-z')}{(x-x')}$. However, this means that $\vc{e}_1$ lies in the same plane as $\vc{e}_2$ and $\vc{e}_3$ (see Theorem 2.4), which contradicts the assumption that there is no plane that is parallel to $\vc{e}_1$, $\vc{e}_2$ and $\vc{e}_3$. Thus there cannot exist any other set of values, $x'$, $y'$, $z'$, that satisfies the equation and therefore the proof is complete.
$\square$
Similarly as before, we say that $\vc{e}_1$, $\vc{e}_2$, and $\vc{e}_3$ are basis vectors, and that $x$, $y$, and $z$ are the coordinates for $\vc{v}$ in the basis of $\{\vc{e}_1,\vc{e}_2,\vc{e}_3\}$.
Now, we can finally see where the vector representation using coordinates comes from. If we assume that a certain basis, $\{\vc{e}_1, \vc{e}_2, \vc{e}_3\}$, is used, then we can write a three-dimensional vector, $\vc{v}$, as
$$\vc{v} = v_x \vc{e}_1 + v_y \vc{e}_2 + v_z \vc{e}_3= \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix},$$ (2.37)
where we have used $v_x$ instead of $x$, $v_y$ instead of $y$, and $v_z$ instead of $z$. This is to make it simpler to mix several different vectors, and still be able to access the individual components. Note that the right-hand expression shows the vector as a column of three numbers, with the $x$-coordinate on top, the $y$-coordinate in the middle, and the $z$-coordinate at the bottom. This notation is so important that we have summarized it in the following definition:
Definition 2.5: Column Vector Notation
Given a basis, a column vector, $\vc{v}$, in $n$ dimensions (we have used $n\in [1,2,3]$) is a column of $n$ scalar values. These scalar components, sometimes called vector elements, of the vector can either be numbered, i.e., $v_1$, $v_2$, and $v_3$, or we can use $x$, $y$, and $z$ as subscripts when that is more convenient. The notation is:
\begin{gather} \underbrace{ \vc{u} = \begin{pmatrix} u_x \end{pmatrix} = \begin{pmatrix} u_1 \end{pmatrix}}_{\text{1D vector}}, \spc\spc \underbrace{ \vc{v} = \begin{pmatrix} v_x \\ v_y \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}}_{\text{2D vector}}, \spc\spc \\ \underbrace{ \vc{w} = \begin{pmatrix} w_x \\ w_y \\ w_z \end{pmatrix} = \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix}}_{\text{3D vector}}, \end{gather} (2.38)
where $\vc{u} = u_x \vc{e}_1$, $\vc{v} = v_x \vc{e}_1 + v_y \vc{e}_2$, and $\vc{w} = w_x \vc{e}_1 + w_y \vc{e}_2 + w_z \vc{e}_3$.
We also use a more compact way of writing vectors, which is convenient when writing vectors in text, for example: $\vc{w} = \bigl(w_1,w_2,w_3\bigr)$, which means the same as above (notice the commas between the vector elements).
Column vectors, per the definition above, are the type of vector that we use mostly throughout this book. Hence, when we say "vector", we mean a "column vector". However, there is also another type of vector, namely, the row vector. As can be deduced from the name, it is simply a row of scalar values, instead of a column of scalar values. An example of a row vector is:
$$\bigl(1\spc 2\spc 5 \bigr).$$ (2.39)
Any vector, be it row or column, can be transposed, which means that a row vector turns into a column vector, and a column vector turns into a row vector. The notation for a transposed vector is: $\vc{v}^T$. An example is shown below:
$$\vc{v} = \begin{pmatrix} 1\\ 2\\ 5 \end{pmatrix}, \spc\spc\spc \vc{v}^\T = \bigl(1\spc 2\spc 5 \bigr).$$ (2.40)
We summarize the transposing of a vector in the following definition:
Definition 2.6: Transpose of a Vector
The transpose of a vector, $\vc{v}$, is denoted by $\vc{v}^\T$, and turns a column vector into a row vector, and a row vector into a column vector. The order of the vector components is preserved.
Note that with this definition, we can transpose a vector twice, and get back the same vector, i.e., $\bigl(\vc{v}^T\bigr)^T = \vc{v}$. Next, we also summarize the row vector definition below:
Definition 2.7: Row Vector Notation
A row vector is expressed as a transposed column vector, as shown below:
$$\underbrace{ \vc{v}^\T = \bigl( v_x \spc v_y \bigr) }_{\text{2D row vector}}, \spc \spc \underbrace{ \vc{w}^\T = \bigl( w_x \spc w_y \spc w_z \bigr) }_{\text{3D row vector}}.$$ (2.41)
Notice that a row vector never has any commas between the vector elements. This is reserved for the compact notation for column vector (see Definition 2.5).
Now, let us assume that we have two vectors, $\vc{u}$ and $\vc{v}$, in the same basis, i.e.,
$$\vc{u} = u_x \vc{e}_1 + u_y \vc{e}_2 + u_z \vc{e}_3= \begin{pmatrix} u_x \\ u_y \\ u_z \end{pmatrix} \spc\spc \text{and} \spc\spc \vc{v} = v_x \vc{e}_1 + v_y \vc{e}_2 + v_z \vc{e}_3= \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}.$$ (2.42)
The addition, $\vc{u}+\vc{v}$, becomes:
\begin{align} \vc{u}+\vc{v} &= u_x \vc{e}_1 + u_y \vc{e}_2 + u_z \vc{e}_3 + v_x \vc{e}_1 + v_y \vc{e}_2 + v_z \vc{e}_3 \\ &=(u_x+v_x)\vc{e}_1 + (u_y+v_y)\vc{e}_2 + (u_z+v_z)\vc{e}_3 \\ &= \begin{pmatrix} u_x+v_x \\ u_y+v_y \\ u_z+v_z \end{pmatrix}. \end{align} (2.43)
As can be seen, the vector addition boils down to simple component-wise scalar addition. For scalar vector multiplication, $k\vc{v}$, we have:
\begin{align} k\vc{v} &= k (v_x \vc{e}_1 + v_y \vc{e}_2 + v_z \vc{e}_3) \\ &= (k v_x) \vc{e}_1 + (k v_y) \vc{e}_2 + (k v_z) \vc{e}_3\\ &= \begin{pmatrix} k v_x \\ k v_y \\ k v_z \end{pmatrix}, \end{align} (2.44)
and here we see that each component of the vector is multiplied by $k$.
Example 2.6: Vector Addition and Scalar Multiplication using Coordinates
Assume we have the following vectors in the same basis:
$$\vc{u} = \left( \begin{array}{r} 3 \\ -4 \\ 7 \end{array} \right), \spc \spc \vc{v} = \left( \begin{array}{r} 1 \\ 2 \\ 5 \end{array} \right), \spc \spc \text{and} \spc \spc \vc{w} = \left( \begin{array}{r} 2 \\ -1 \\ 6 \end{array} \right),$$ (2.45)
and that we now want to evaluate $\vc{u} + \vc{v} - 2\vc{w}$. As we have seen above, vector addition is simply a matter of adding the vector elements:
$$\vc{u}+\vc{v} = \left( \begin{array}{r} 3 \\ -4 \\ 7 \end{array} \right) + \left( \begin{array}{r} 1 \\ 2 \\ 5 \end{array} \right) = \left( \begin{array}{r} 3+1 \\ -4+2 \\ 7+5 \end{array} \right) = \left( \begin{array}{r} 4 \\ -2 \\ 12 \end{array} \right).$$ (2.46)
We can also scale a vector by a scalar value, e.g., $k=2$:
$$2\vc{w} = 2 \left( \begin{array}{r} 2 \\ -1 \\ 6 \end{array} \right) = \left( \begin{array}{c} 2\cdot 2 \\ 2\cdot (-1) \\ 2\cdot 6 \end{array} \right) = \left( \begin{array}{r} 4 \\ -2 \\ 12 \end{array} \right),$$ (2.47)
which means that $\vc{u} + \vc{v} - 2\vc{w} = \vc{0}$.
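The component-wise rules in (2.43) and (2.44) translate directly into code. The following Python sketch, an illustration rather than part of the original text, redoes this example:

```python
def add(u, v):
    """Vector addition, component by component, as in (2.43)."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(k, v):
    """Scalar vector multiplication, as in (2.44)."""
    return tuple(k * vi for vi in v)

u = (3, -4, 7)
v = (1, 2, 5)
w = (2, -1, 6)

print(add(add(u, v), scale(-2, w)))   # u + v - 2w = (0, 0, 0), the zero vector
```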
In many calculations, one uses a simple and intuitive basis called the standard basis, which is defined as follows.
Definition 2.8: Standard Basis
The standard basis in this book is as follows for two and three dimensions, that is,
\begin{gather} \underbrace{ \vc{e}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\ \vc{e}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} }_{\mathrm{two-dimensional\ standard\ basis}} \ \ \mathrm{and} \\ \ \\ \underbrace{ \vc{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\ \vc{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\ \vc{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. }_{\mathrm{three-dimensional\ standard\ basis}} \end{gather} (2.48)
In general, for an $n$-dimensional standard basis, the basis vector $\vc{e}_i$ has vector elements which are all zeros, except the $i$th element, which is a one.
In Chapter 3, we will discuss different types of bases, and we will see that the standard basis is, in fact, an orthonormal basis (Section 3.3).
Example 2.7: Addition in the Standard Basis
In this example, we will illustrate how vector addition is done in the standard basis in order to increase the reader's intuition about addition. See Interactive Illustration 2.21. Recall that the standard basis vectors in two dimensions are $\vc{e}_1=(1,0)$ and $\vc{e}_2=(0,1)$.
Interactive Illustration 2.21: Here, two vectors are shown in the standard basis. Click/touch Forward to continue.
Interactive Illustration 2.21: The coordinates of the blue vector are simply the sum of the respective coordinates of the red and green vectors. For example, the $\hid{x}$-coordinate of the blue vector is simply the addition of the $\hid{x}$-coordinates of the red and green vectors. Recall that the vector can be moved around by clicking/touching close to the tip of the red or green vector, and then dragging.
Next, two intuitive examples will be given on the topics of coordinate systems, basis vectors, uniqueness, and coordinates.
Example 2.8: Same Point Expressed In Different Bases
Note that the same point will have different coordinates when different basis vectors are used, as shown in Interactive Illustration 2.22. Note in the illustration that when the basis vectors change, the coordinates change too, but the point stays at the same place all the time.
Interactive Illustration 2.22: Here, we show how the same point, $P$, can be expressed in different coordinate systems. In the first step, we have an ordinary coordinate system where the first basis vector, $\vc{e}_1$ (thick red arrow) and the second basis vector, $\vc{e}_2$ (green thick arrow) make a right angle, and they are of equal length. The coordinates $(2,1)$ mean that if we start from the origin, go two steps along $\vc{e}_1$ and one step along $\vc{e}_2$, with the result that we end up in $P$.
Interactive Illustration 2.22: Here is another example, where the coordinates for $\hid{P}$ equals $\hid{(3,1)}$. Note that you can move the two basis vectors and $\hid{P}$, while the coordinates will adjust accordingly. Note also that if you place the two basis vectors so that they become almost parallel, then the coordinates start to rise dramatically and erratically. This makes sense, since if the vectors were indeed completely parallel, you would only be able to represent points on the line going from the origin along the first basis vector (or the second, which would be equivalent). If they are slightly different, this small difference must be enhanced by a large number in order to reach $\hid{P}$.
Going back to stage two in Interactive Illustration 2.22, it is obvious that adding the two basis vectors together gives exactly the vector $\overrightarrow{OP}$, so $\overrightarrow{OP} = 1.0 \vc{e}_1 + 1.0 \vc{e}_2$ must hold. Thus, $(1, 1)$ is a valid coordinate pair for the point $P$. However, one may ask whether there are any other coordinates that will also describe the point $P$, now that the basis vectors no longer need to make a right angle. The answer to this is no, as we have seen in the proof of Theorem 2.4. A bit more intuition about why this is so can be obtained from Interactive Illustration 2.23.
Interactive Illustration 2.23: In this interactive figure, the fat arrows represent the basis vectors, $\vc{e}_1$ and $\vc{e}_2$. The point, $P$, has the coordinates $(2.5, 1.0)$ since, if you start in the origin, you need to go $2.5$ steps along $\vc{e}_1$ (thin red arrow) and one step along $\vc{e}_2$ (thin green arrow) to get to $P$. But could other coordinates also work? Press/click Forward to advance the illustration.
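Finding the coordinates $(x, y)$ of a vector $\overrightarrow{OP}$ in a given basis $\{\vc{e}_1, \vc{e}_2\}$ amounts to solving the small linear system $x\vc{e}_1 + y\vc{e}_2 = \overrightarrow{OP}$. As a sketch (not part of the original text), the Python function below handles the $2\times 2$ case with Cramer's rule; the basis and the vector are made-up examples expressed in the standard basis.

```python
def coords_in_basis(e1, e2, v):
    """Solve x*e1 + y*e2 = v for (x, y) with Cramer's rule.

    Assumes e1 and e2 are not parallel, so the determinant is nonzero.
    """
    det = e1[0] * e2[1] - e1[1] * e2[0]
    x = (v[0] * e2[1] - v[1] * e2[0]) / det
    y = (e1[0] * v[1] - e1[1] * v[0]) / det
    return x, y

e1 = (2.0, 1.0)    # a non-orthogonal basis
e2 = (0.5, 1.5)
v  = (4.5, 3.5)    # the vector OP, given in the standard basis

print(coords_in_basis(e1, e2, v))   # (2.0, 1.0), i.e., v = 2*e1 + 1*e2
```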
In this chapter, we have introduced the notion of vectors. The definitions of a vector in Section 2.5 and the basic operations, such as vector addition (Definition 2.3) and multiplication with a scalar (Definition 2.4), have been defined geometrically. We then showed that these two operations fulfill a number of properties in Theorem 2.1. This definition works for $\R^1$, $\R^2$, and $\R^3$. For higher dimensions, it is difficult for us to use the geometric definition. The notion of a two-dimensional and three-dimensional vector is in itself very useful, but geometric vectors are also a stepping stone for understanding general linear spaces or vector spaces. This more general theory is extremely useful for modeling and understanding problems when we have more than three unknown parameters. The reader may want to skip the following section, and revisit it later depending on his/her needs.
In this section, we will first give a definition of $\R^n$.
Definition 2.9: Real Coordinate Space
The vector space $\R^n$ is defined as the set of $n$-tuples $\vc{u} = (u_1, u_2, \ldots, u_n)$, where each $u_i$ is a real number. It is a vector space over the real numbers $\R$, where vector addition $\vc{u}+\vc{v}$ is defined as $\vc{u}+\vc{v} = (u_1+v_1, u_2+v_2, \ldots, u_n+v_n)$ and scalar-vector multiplication is defined as $k\vc{v} = (k v_1, k v_2, \ldots, k v_n)$, where $k\in \R$.
Note that using these definitions for vector addition and scalar-vector multiplication, the properties of Theorem 2.1 all hold.
Example 2.10:
Let $\vc{u}=(1,2,3,4,5)$ and $\vc{v}=(5,4,3,2,1)$ be two vectors in $\R^5$. What is $\vc{u}+\vc{v}$, $3\vc{u}$ and $3\vc{u}+3\vc{v}$?
$$\vc{u}+\vc{v} = (1,2,3,4,5) + (5,4,3,2,1) = (1+5,2+4,3+3,4+2,5+1) = (6,6,6,6,6)$$ (2.49)
$$3\vc{u}= 3(1,2,3,4,5) = (3 \cdot 1,3 \cdot 2,3 \cdot 3,3 \cdot 4,3 \cdot 5) = (3,6,9,12,15)$$ (2.50)
$$3\vc{u}+3\vc{v} = 3 (\vc{u}+\vc{v}) = 3 (6,6,6,6,6) = (18,18,18,18,18)$$ (2.51)
In this last step, the result ($\vc{u}+\vc{v}$) from Equation (2.49) was used.
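Since the operations in $\R^n$ are component-wise for any $n$, Example 2.10 can be reproduced with a few lines of Python (an illustration only, not part of the original text):

```python
u = (1, 2, 3, 4, 5)
v = (5, 4, 3, 2, 1)

u_plus_v = tuple(a + b for a, b in zip(u, v))           # vector addition in R^5
three_u = tuple(3 * a for a in u)                       # scalar-vector multiplication
three_u_plus_three_v = tuple(3 * s for s in u_plus_v)   # 3u + 3v = 3(u + v)

print(u_plus_v)               # (6, 6, 6, 6, 6)
print(three_u)                # (3, 6, 9, 12, 15)
print(three_u_plus_three_v)   # (18, 18, 18, 18, 18)
```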
Definition 2.10: Basis in $\R^n$
A basis in $\R^n$ is a set of vectors $\{\vc{e}_1, \ldots, \vc{e}_m\}$ such that for every vector $\vc{u}\in\R^n$, there is a unique set of coordinates $(u_1, \ldots, u_m)$ such that
$$\vc{u} = \sum_{i=1}^m u_i \vc{e}_i.$$ (2.52)
Example 2.11: Canonical Basis in $\R^n$
The canonical basis in $\R^n$ is the following set of basis vectors
$$\begin{cases} \begin{array}{ll} \vc{e}_1 &= (1, 0, \ldots, 0), \\ \vc{e}_2 &= (0, 1, \ldots, 0), \\ \vdots & \\ \vc{e}_n &= (0, 0, \ldots, 1). \end{array} \end{cases}$$ (2.53)
#### 2.6.1 The General Definition
We will now present an abstract definition of a vector space. Then we will show that any finite-dimensional vector space over $\R$ is in fact 'the same as' $\R^n$ that we defined earlier in Definition 2.9.
Definition 2.11: Vector space
A vector space consists of a set $V$ of objects (called vectors) and a field $F$, together with a definition of vector addition and multiplication of a scalar with a vector, in such a way that the properties of Theorem 2.1 hold.
A vector space consists of a set $V$ of objects. As we shall see in one example below, the vector space is the set of gray-scale images of size $m \times n$ pixels. In another example, the vector space is a set of polynomials up to a fixed degree. The elements of the field $F$ are called scalars. A field is a set of objects where addition, subtraction, multiplication, and division are well defined and follow the usual properties. Most often the field used is the set of real numbers $\R$ or the set of complex numbers $\mathbb{C}$, but one could use more exotic fields, such as the integers modulo a prime number, e.g., $\mathbb{Z}_3$.
Example 2.12: Polynomials up to degree 2
Polynomials in $x$ of degree at most 2 with real coefficients form a vector space over $\R$. Write $u = u_0 + u_1 x + u_2 x^2$ and $v = v_0 + v_1 x + v_2 x^2$, where each coefficient $u_i$ and $v_i$ is a real number. Vector addition $u+v$ is defined as $u+v = (u_0+v_0) + (u_1+v_1) x + (u_2+v_2) x^2$, and scalar-vector multiplication is defined as $ku = k u_0 + k u_1 x + k u_2 x^2$.
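One way to make this concrete in code is to store a polynomial as its tuple of coefficients $(u_0, u_1, u_2)$; the two vector space operations then look exactly like they do in $\R^3$. A minimal Python sketch (not from the original text):

```python
# A polynomial u0 + u1*x + u2*x^2 is stored as the coefficient tuple (u0, u1, u2).
def poly_add(u, v):
    """Vector addition: add the polynomials coefficient by coefficient."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def poly_scale(k, u):
    """Scalar-vector multiplication: scale every coefficient by k."""
    return tuple(k * ui for ui in u)

u = (1.0, 0.0, 2.0)    # 1 + 2x^2
v = (0.0, 3.0, -1.0)   # 3x - x^2

print(poly_add(u, v))      # (1.0, 3.0, 1.0)  ->  1 + 3x + x^2
print(poly_scale(2.0, u))  # (2.0, 0.0, 4.0)  ->  2 + 4x^2
```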
Example 2.13: Gray-scale images
Gray-scale images, where each pixel intensity is a real number, form a vector space over $\R$. If the pixel of the image $u$ at position $(i,j)$ has intensity $u_{i,j}$, and similarly the pixel of the image $v$ at position $(i,j)$ has intensity $v_{i,j}$, then vector addition is defined as the image $u+v$ whose pixel at position $(i,j)$ has intensity $u_{i,j}+v_{i,j}$. Scalar-vector multiplication is defined as the image $ku$ whose pixel at position $(i,j)$ has intensity $k u_{i,j}$.
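As a small sketch (with made-up pixel values, not from the original text), a tiny "image" can be stored as a nested list of intensities; adding two images and scaling an image are then element-wise operations, exactly as described above.

```python
u = [[0.0, 0.5],
     [1.0, 0.25]]   # a 2x2 gray-scale "image"
v = [[0.1, 0.1],
     [0.1, 0.1]]

# Vector addition: add the images pixel by pixel.
u_plus_v = [[ui + vi for ui, vi in zip(urow, vrow)] for urow, vrow in zip(u, v)]

# Scalar-vector multiplication: scale every pixel by k.
k = 2.0
ku = [[k * ui for ui in urow] for urow in u]

print(u_plus_v)  # [[0.1, 0.6], [1.1, 0.35]]
print(ku)        # [[0.0, 1.0], [2.0, 0.5]]
```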
Example 2.14: $\mathbb{Z}_3$ Coordinate Space
The vector space $\mathbb{Z}_3^n$ is defined as the set of $n$-tuples $\vc{u} = (u_1, u_2, \ldots, u_n)$, where each $u_i$ is one of the integers $0$, $1$, or $2$. It is a vector space over the field $\mathbb{Z}_3$, whose elements are the integers $0$, $1$, and $2$. Vector addition $\vc{u}+\vc{v}$ is defined as $\vc{u}+\vc{v} = (u_1+v_1, u_2+v_2, \ldots, u_n+v_n)$ and scalar-vector multiplication is defined as $k\vc{v} = (k v_1, k v_2, \ldots, k v_n)$. Here, the additions and multiplications of scalars are done modulo 3.
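In code, the only difference from $\R^n$ is that every component operation is reduced modulo 3. A short Python sketch (an illustration only):

```python
def add_mod3(u, v):
    """Vector addition in Z_3^n: component-wise addition modulo 3."""
    return tuple((ui + vi) % 3 for ui, vi in zip(u, v))

def scale_mod3(k, v):
    """Scalar-vector multiplication in Z_3^n, with the scalar k in {0, 1, 2}."""
    return tuple((k * vi) % 3 for vi in v)

u = (1, 2, 0, 2)
v = (2, 2, 1, 1)

print(add_mod3(u, v))    # (0, 1, 1, 0)
print(scale_mod3(2, u))  # (2, 1, 0, 1)
```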
Definition 2.12: Basis in Vector Space
A basis in a finite-dimensional vector space $V$ over $F$ is a set of vectors $\{\vc{e}_1, \ldots, \vc{e}_m\}$ such that for every vector $\vc{u} \in V$, there is a unique set of coordinates $(u_1, \ldots, u_m)$ with $u_i \in F$, such that
$$\vc{u} = \sum_{i=1}^m u_i \vc{e}_i.$$ (2.54)
The number, $m$, of basis vectors is said to be the dimension of the vector space. We will later show that this is a well-defined number for a given vector space.
Theorem 2.6: Vector in Vector Space
Let $V$ be an $m$-dimensional vector space over $\R$ and let $\{\vc{e}_1, \ldots, \vc{e}_m\}$ be a basis. Then each vector $\vc{u}$ can be identified with its coordinates $(u_1, \ldots, u_m)$.
In this way, one can loosely say that each $m$-dimensional vector space over $\R$ is 'the same thing' as $\R^m$.
The vector concept has been treated in this chapter, and the vector addition and scalar vector multiplication operations have been introduced. In addition, we have seen that these operations behave pretty much as expected, i.e., similar to how we calculate with real numbers. To make the vectors a bit more practical, the basis concept was introduced, and we saw how, e.g., a three-dimensional vector can be represented by three scalar numbers with respect to a certain basis. Finally, we also briefly introduced the concept of a higher-dimensional vector space $\R^n$. In Chapter 3, the dot product operation will be introduced. It is useful when measuring lengths and angles.
#### Archived
This topic is now archived and is closed to further replies.
# OpenGL SpotLights
## Recommended Posts
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, LightDirection);
glLighti(GL_LIGHT0, GL_SPOT_CUTOFF, 90);
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0);

Which values to choose to get a "normal" spotlight?

Gandalf the White
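A minimal fixed-function setup that usually gives a visible, "normal" spotlight is sketched below. It is written with PyOpenGL, so the call names match the C API; the position, direction, cutoff, and exponent values are example assumptions, not values from this thread. The main points: the light must be positional (w = 1), GL_SPOT_CUTOFF must be at most 90 (the special value 180 disables the spot behavior), and both position and direction are transformed by the current modelview matrix, so set them after the camera transform.

```python
from OpenGL.GL import *

def setup_spotlight():
    # Example values only: a light above the scene, pointing straight down.
    position = [0.0, 5.0, 0.0, 1.0]    # w = 1.0 makes the light positional (required for a spotlight)
    direction = [0.0, -1.0, 0.0]

    glEnable(GL_LIGHTING)
    glEnable(GL_LIGHT0)

    # Call these AFTER loading the camera/modelview transform, since both
    # the position and the spot direction are transformed by that matrix.
    glLightfv(GL_LIGHT0, GL_POSITION, position)
    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, direction)

    glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0)    # half-angle of the cone, must be <= 90
    glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 8.0)   # 0 = uniform inside the cone; larger focuses light toward the axis

    glLightfv(GL_LIGHT0, GL_DIFFUSE, [1.0, 1.0, 1.0, 1.0])
    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.0)

# Note: fixed-function lighting is evaluated per vertex, so coarsely
# tessellated geometry can miss a narrow spot cone entirely.
```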
How to create a decent spotlight in OpenGL? All my objects turn dark. I'm sure about the direction but not so sure about GL_SPOT_EXPONENT and the attenuation params. I set the angle to 90 degrees. Nothing happened. Darker than coffee.
Gandalf the White
read the OpenGL programming guide (aka red book) version 2 (about OpenGL1.2)
It must be available online.
I'll check that this evening; I'm just too tired to answer now.
You will get my answer only tomorrow, sorry.
-* Sounds, music and story makes the difference between good and great games *-
Your object might have too few vertices, and your spotlight might not be caught by any of them, thus everything is dark. Mark Kilgard had an article on opengl.org on the 16 most common OGL mistakes, and this was one of them.
I don't think so. I have 100 objects. Every object is built from 24 vertices. That makes 2400 vertices.
Gandalf the White
Edited by - Gandalf on June 16, 2000 4:02:34 AM
It's a hard problem. I don't think I can solve it this time. I really need help.
In D3DIM everything works fine. I get the result I want: a spotlight moving up and down, illuminating a circle. Everything else is almost black. But the weird thing is I have tried to set up exactly the same scene in D3DIM and OpenGL. Same light source, direction, position, angle. Same number of vertices, same polygon type (triangle strip), texture (also tried without textures) and same perspective (the near and far clip planes are the same). Everything turns coffee black in OpenGL!
Gandalf the White
Edited by - Gandalf on June 16, 2000 6:26:56 AM
Bad Monkey, I fixed it, all thanks to you!
Never play with the GL_PROJECTION matrix.
Gandalf the White
Figure $$\PageIndex{1}$$: The motion of an American kestrel through the air can be described by the bird's displacement, speed, velocity, and acceleration. When it flies in a straight line without any change in direction, its motion is said to be one dimensional. (credit: Vince Maidens, Wikimedia Commons)
# If N is the product of all multiples of 3 between 1 and 100
Math Expert
Joined: 02 Sep 2009
Posts: 44412
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
22 Aug 2013, 03:53
1
KUDOS
Expert's post
mumbijoh wrote:
Dear Bunuel
I came across this question and I really do not understand it. I read the "Everything about factorial" link but I can't seem to apply what I have read there to this question.
"
once in 15;
once in 30;
once in 45;
once in 60;
twice in 75 (5*5*3);
once in 90;
15=5*3
30=5*6
45=5*9
60=5*12
75=5^2*3
90=5*18
Similar questions to practice:
if-n-is-the-greatest-positive-integer-for-which-2n-is-a-fact-144694.html
what-is-the-largest-power-of-3-contained-in-103525.html
if-n-is-the-product-of-all-positive-integers-less-than-103218.html
if-n-is-the-product-of-integers-from-1-to-20-inclusive-106289.html
if-n-is-the-product-of-all-multiples-of-3-between-1-and-101187.html
if-p-is-the-product-of-integers-from-1-to-30-inclusive-137721.html
what-is-the-greatest-value-of-m-such-that-4-m-is-a-factor-of-105746.html
if-6-y-is-a-factor-of-10-2-what-is-the-greatest-possible-129353.html
if-m-is-the-product-of-all-integers-from-1-to-40-inclusive-108971.html
if-p-is-a-natural-number-and-p-ends-with-y-trailing-zeros-108251.html
if-73-has-16-zeroes-at-the-end-how-many-zeroes-will-147353.html
find-the-number-of-trailing-zeros-in-the-expansion-of-108249.html
how-many-zeros-are-the-end-of-142479.html
how-many-zeros-does-100-end-with-100599.html
find-the-number-of-trailing-zeros-in-the-product-of-108248.html
if-60-is-written-out-as-an-integer-with-how-many-consecuti-97597.html
if-n-is-a-positive-integer-and-10-n-is-a-factor-of-m-what-153375.html
if-d-is-a-positive-integer-and-f-is-the-product-of-the-first-126692.html
Hope it helps.
_________________
Manager
Joined: 15 Apr 2013
Posts: 80
Location: India
Concentration: Finance, General Management
Schools: ISB '15
WE: Account Management (Other)
### Show Tags
25 Aug 2013, 08:14
I have spent most of the day on this topic and questions.
I am still feeling a little apprehensive about the time these consume and the difficulty of these questions.
Any suggestions on how to gain confidence in this area?
Bunuel wrote:
Math Expert
Joined: 02 Sep 2009
Posts: 44412
### Show Tags
25 Aug 2013, 10:50
pavan2185 wrote:
I have spent most of the day on this topic and questions.
I am still feeling little apprehensive about the time these consume and the difficulty of these questions.
Any suggestions how to gain confidence in this area?
Bunuel wrote:
Can you please tell what do you find most challenging in them? Thank you.
Check other similar questions here: if-n-is-the-product-of-all-multiples-of-3-between-1-and-101187-20.html#p1259389
_________________
Manager
Joined: 15 Apr 2013
Posts: 80
Location: India
Concentration: Finance, General Management
Schools: ISB '15
WE: Account Management (Other)
### Show Tags
25 Aug 2013, 11:23
Bunuel wrote:
Can you please tell what do you find most challenging in them? Thank you.
Check other similar questions here: if-n-is-the-product-of-all-multiples-of-3-between-1-and-101187-20.html#p1259389
I understand the basic concept you explained in the math book by GMAT Club and the various explanations you have given, but I am finding it difficult to apply it to hard questions that involve multiple factorials and questions that do not specifically give any factorial but give a complex product of numbers.
Math Expert
Joined: 02 Sep 2009
Posts: 44412
### Show Tags
25 Aug 2013, 11:29
1
KUDOS
Expert's post
pavan2185 wrote:
Bunuel wrote:
Can you please tell what do you find most challenging in them? Thank you.
Check other similar questions here: if-n-is-the-product-of-all-multiples-of-3-between-1-and-101187-20.html#p1259389
I understand the basic concept you explained in the mathbook by Gmatclub and various explanations you have given,but I am finding it difficult to apply on hard questions that involve multiple factorilas and questions that do not specifically give any fcatorial but give a complex product of numbers.
In that case, I must say that practice should help.
_________________
Manager
Joined: 15 Apr 2013
Posts: 80
Location: India
Concentration: Finance, General Management
Schools: ISB '15
WE: Account Management (Other)
### Show Tags
25 Aug 2013, 11:34
Thank you!!! You have always been helpful.
Yes, I hope that will help. I will try to practise every available question on this topic.
Thanks for your outstanding work on GC math forums.
Bunuel wrote:
pavan2185 wrote:
Bunuel wrote:
Can you please tell what do you find most challenging in them? Thank you.
Check other similar questions here: if-n-is-the-product-of-all-multiples-of-3-between-1-and-101187-20.html#p1259389
I understand the basic concept you explained in the mathbook by Gmatclub and various explanations you have given,but I am finding it difficult to apply on hard questions that involve multiple factorilas and questions that do not specifically give any fcatorial but give a complex product of numbers.
In that case, I must say that practice should help.
Intern
Status: Onward and upward!
Joined: 09 Apr 2013
Posts: 16
Location: United States
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
25 Sep 2013, 12:18
Finding the powers of a prime number p in n!
The formula is: $$\frac{n}{p} + \frac{n}{p^2} + \frac{n}{p^3} + \ldots$$ (each division rounded down), continuing as long as the denominator does not exceed n.
Example:
What is the power of 2 in 25!?
^^ Taken from the GMAT Club book...what is the logic behind this question? What are they really asking?
_________________
Kudos if my post was helpful!
MBA Section Director
Status: Back to work...
Affiliations: GMAT Club
Joined: 22 Feb 2012
Posts: 5061
Location: India
City: Pune
GMAT 1: 680 Q49 V34
GPA: 3.4
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
25 Sep 2013, 13:34
1
KUDOS
Expert's post
TAL010 wrote:
Finding the powers of a prime number p, in the n!
The formula is:
Example:
What is the power of 2 in 25!?
^^ Taken from the GMAT Club book...what is the logic behind this question? What are they really asking?
It means calculating number of instances of P in n!
Consider the simple example ---> what is the power of 3 in 10!
We can find four instances of three in 10! -----> 1 * 2 * 3 * 4 * 5 * (2*3) * 7 * 8 * (3*3) * 10
You can see above we can get four 3s in the expression.
Calculating the number of instances in this way could be tedious for long expressions, but there is a simple formula to calculate the power of a particular prime.
The power of a prime P in n! is given by $$\frac{n}{p} + \frac{n}{p^2} + \frac{n}{p^3} + \ldots$$ (each division rounded down), as long as the denominator is less than or equal to the numerator.
what is the power of 3 in 10! ------> $$\frac{10}{3} + \frac{10}{3^2} = 3 + 1 = 4$$
Analyze how the process works........
We first divided 10 by the 1st power of 3, i.e., by 3^1, in order to get all the single 3s (marked red above).
Later we divided 10 by the 2nd power of 3, i.e., by 3^2, in order to get the leftover 3 (marked blue, from 9 = 3*3).
We can continue in this way, increasing the power of P, as long as it is not greater than n.
Back to the original question..............
What is the power of 2 in 25!? ---------> 25/2 + 25/4 + 25/8 + 25/16 = 12 + 6 + 3 + 1 = 22
Hope that helps!
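For anyone who wants to double-check such counts programmatically, the formula above translates into a couple of lines of Julia (a minimal sketch; the function name is made up for illustration):
function prime_power_in_factorial(p, n)
    total, q = 0, p
    while q <= n
        total += n ÷ q    # floor(n / p^k)
        q *= p
    end
    return total
end
prime_power_in_factorial(3, 10)   # 4, the power of 3 in 10!
prime_power_in_factorial(2, 25)   # 22, the power of 2 in 25!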
_________________
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8001
Location: Pune, India
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
25 Sep 2013, 21:23
1
KUDOS
Expert's post
TAL010 wrote:
Finding the powers of a prime number p, in the n!
The formula is:
Example:
What is the power of 2 in 25!?
^^ Taken from the GMAT Club book...what is the logic behind this question? What are they really asking?
Check out this post: http://www.veritasprep.com/blog/2011/06 ... actorials/
It answers this question in detail explaining the logic behind it.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for \$199
Veritas Prep Reviews
Intern
Joined: 07 Jul 2013
Posts: 7
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
25 Sep 2013, 23:35
1
KUDOS
We know that for a number to be divisible by 10, it must have at least one trailing zero. Let's break 10 into its prime factors, i.e. 5 and 2. Now, we need to find pairs of 2 and 5 in the product. Here, 5 is our limiting factor, as it appears less often than 2 does. Therefore, to count the number of 5s, we must count the 5s in all multiples of 3 between 1 and 100.
15= One 5
30= One 5
45= One 5
60= One 5
75 = Two 5s (5 x 5 x3=75)
90= One 5.
Director
Joined: 03 Aug 2012
Posts: 874
Concentration: General Management, General Management
GMAT 1: 630 Q47 V29
GMAT 2: 680 Q50 V32
GPA: 3.7
WE: Information Technology (Investment Banking)
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
30 Sep 2013, 12:07
N= 3*6*9*............99
N= 3^33 { 1*2*3*4*5*.........*10..........*15....*20...*25...*30.....*33}
The numbers of times 5 can appear in above product is
5=1
10=1
15=1
20=1
25=2
30=1
Total 7
So N/10^m => N/(5^m *2^m)
Thus m=7.
_________________
Rgds,
TGC!
_____________________________________________________________________
I Assisted You => KUDOS Please
_____________________________________________________________________________
Intern
Joined: 17 Apr 2012
Posts: 3
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
19 Jan 2014, 08:40
Hello all, I am just wondering if the question was a bit different like this :" If N is the product of all multiples of 5 between 1 and 100, what is the greatest integer m for which $$\frac{N}{10^m}$$ is an integer?",
then we will have": $$N = {5^{33}} * {33!}$$. If so, then my answer would be 31. Please allow me to share my take on this.
Basically, I need to find how many 2s and how many 5s are there in N. I use Bunuel's formula and I got the followings:
Number of 2s in 33!: 33/2 + 33/4 + 33/8 + 33/16 + 33/32 = 31. (Since $${5^{33}}$$ does not have any 2)
Number of 5s in 33!: 33/5 + 33/25 = 7. However, I have $${5^{33}}$$, that leaves me with $$5^{40}$$ in N.
Since I need $${10^m}$$, I will need as many 5 AND 2 as possible in N. I have 31 pairs of 2 and 5. So, m = 31.
If someone please confirm my thought, it would be greatly appreciated! Thanks.
Intern
Joined: 14 Feb 2013
Posts: 20
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
17 Jul 2014, 06:17
Bunuel wrote:
rafi wrote:
If N is the product of all multiples of 3 between 1 and 100, what is the greatest integer m for which $$\frac{N}{10^m}$$ is an integer?
a. 3
b. 6
c. 7
d. 8
e. 10
How do you solve these sort of questions quickly
Thanks
We should determine # of trailing zeros of N=3*6*9*12*15*...*99 (a sequence of 0's of a number, after which no other digits follow).
Since there are at least as many factors 2 in N as factors of 5, then we should count the number of factors of 5 in N and this will be equivalent to the number of factors 10, each of which gives one more trailing zero.
Factors of 5 in N:
once in 15;
once in 30;
once in 45;
once in 60;
twice in 75 (5*5*3);
once in 90;
1+1+1+1+2+1=7 --> N has 7 trailing zeros, so greatest integer $$m$$ for which $$\frac{N}{10^m}$$ is an integer is 7.
Hope it helps.
I found my answer by finding the number of multiples of 3 between 1 and 100 i.e 100/3 = 33.
Then I found the number of trailing zeroes in 33! = 7
so 10^7 can be the maximum for N/10^m to remain an integer.
Am I just lucky or can this also be a method of solving?
Math Expert
Joined: 02 Sep 2009
Posts: 44412
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
17 Jul 2014, 06:35
hamzakb wrote:
Bunuel wrote:
rafi wrote:
If N is the product of all multiples of 3 between 1 and 100, what is the greatest integer m for which $$\frac{N}{10^m}$$ is an integer?
a. 3
b. 6
c. 7
d. 8
e. 10
How do you solve these sort of questions quickly
Thanks
We should determine # of trailing zeros of N=3*6*9*12*15*...*99 (a sequence of 0's of a number, after which no other digits follow).
Since there are at least as many factors 2 in N as factors of 5, then we should count the number of factors of 5 in N and this will be equivalent to the number of factors 10, each of which gives one more trailing zero.
Factors of 5 in N:
once in 15;
once in 30;
once in 45;
once in 60;
twice in 75 (5*5*3);
once in 90;
1+1+1+1+2+1=7 --> N has 7 trailing zeros, so greatest integer $$m$$ for which $$\frac{N}{10^m}$$ is an integer is 7.
Hope it helps.
I found my answer by finding the number of multiples of 3 between 1 and 100 i.e 100/3 = 33.
Then I found the number of trailing zeroes in 33! = 7
so 10^7 can be the maximum for N/10^m to remain an integer.
Am I just lucky or can this also be a method of solving?
If you solve this way it should be:
N = 3*6*9*12*15*...*99 = 3^33(1*2*3*...*33) = 3^33*33!.
The number of trailing zeros for 33! is 33/5 + 33/25 = 6 + 1 = 7.
Check Trailing Zeros and Power of a number in a factorial questions in our Special Questions Directory.
Hope it helps.
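For the skeptical, here is a minimal brute-force check in Julia (my own sketch, not part of the original solution) that builds N exactly and counts its trailing zeros:
function count_trailing_zeros(n)
    c = 0
    while n % 10 == 0
        c += 1
        n ÷= 10
    end
    return c
end
N = prod(big(k) for k in 3:3:99)   # 3 * 6 * 9 * ... * 99, computed exactly as a BigInt
count_trailing_zeros(N)            # 7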
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 44412
If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
17 Jul 2014, 06:37
abmyers wrote:
N = The product of the sequence of 3*6*9*12....*99
N therefore is also equal to 3* (1*2*3*.....*33)
Therefore N = 3* 33!
From here we want to find the exponent number of prime factors, specifically the factors of 10.
10 = 5*2 so we want to find which factors is the restrictive factor
We can ignore the 3, since a factor that is not divisible by 5 or 2 is still not divisible if that number is multiplied by 3.
Therefore:
33/ 2 + 33/4 + 33/8 = 16+8+4 = 28
33/ 5 + 33/25 = 6 + 1 = 7
5 is the restrictive factor.
Here is a similar problem: number-properties-from-gmatprep-84770.html
The part in red above (the claim that N = 3*(1*2*3*.....*33) = 3*33!) is not correct.
Should be: $$N = 3*6*9*12*15*...*99 = 3^{33}(1*2*3*...*33) = 3^{33}*33!$$.
_________________
Intern
Joined: 25 Sep 2013
Posts: 5
GMAT 1: 720 Q50 V37
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
27 Jul 2014, 06:39
Please correct me if I am wrong.
I simply calculated m (= amount of trailing zeroes) this way:
$$\frac{100}{3*5}+\frac{100}{3* 25}= 6 + 1 = 7$$
dividing by 3*5 and 3*25 one ensures that only multiples of 3 are taken into consideration!
Math Expert
Joined: 02 Sep 2009
Posts: 44412
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
27 Jul 2014, 15:30
JK13 wrote:
Please correct me if I am wrong.
I simply calculated m (= amount of trailing zeroes) this way:
$$\frac{100}{3*5}+\frac{100}{3* 25}= 6 + 1 = 7$$
dividing by 3*5 and 3*25 one ensures that only multiples of 3 are taken into consideration!
No, that's not correct. N = 3*6*9*12*15*...*99 = 3^33*33!, not 100.
Check Trailing Zeros and Power of a number in a factorial questions in our Special Questions Directory.
Hope it helps.
_________________
Intern
Joined: 25 Sep 2013
Posts: 5
GMAT 1: 720 Q50 V37
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
28 Jul 2014, 02:06
Quote:
JK13 wrote:
Please correct me if I am wrong.
I simply calculated m (= amount of trailing zeroes) this way:
$$\frac{100}{3*5}+\frac{100}{3* 25}= 6 + 1 = 7$$
dividing by 3*5 and 3*25 one ensures that only multiples of 3 are taken into consideration!
No, that's not correct. N = 3*6*9*12*15*...*99 = 3^33*33!, not 100.
Check Trailing Zeros and Power of a number in a factorial questions in our Special Questions Directory.
Hope it helps.
I understand that N is not 100 in this question, when looking for ALL trailing zeros in a number (which here would ofc be 3^33*33!).
My approach was more based on finding the number of those trailing zeros in 100! that are a result of multiples of 3.
Isn't that then the same amount of trailing zeros, that we want to find in this question?
Another example could be: "Look for the trailing zeros in the product of all multiples of 7 between 1 and 100"
$$\frac{100}{7*5}+\frac{100}{7*25}= 2 + 0 = 2$$
MBA Blogger
Joined: 19 Apr 2014
Posts: 101
Location: India
Concentration: Strategy, Technology
Schools: NTU '19
WE: Analyst (Computer Software)
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
13 Aug 2014, 05:39
Question seems to be very confusing at first.
I solved it like this:
3*6*9*12*15.........*96*99 = 3^33(1*2*3*4.........*32*33) = 3^33*33!
Now the number of trailing 0's will be equal to the number of trailing 0's in 33!
i.e. 33/5+33/25 = 7
_________________
Warm Regards.
Visit My Blog
Intern
Joined: 21 Jul 2014
Posts: 21
Re: If N is the product of all multiples of 3 between 1 and 100 [#permalink]
### Show Tags
18 Aug 2014, 11:11
rafi wrote:
If N is the product of all multiples of 3 between 1 and 100, what is the greatest integer m for which $$\frac{N}{10^m}$$ is an integer?
A. 3
B. 6
C. 7
D. 8
E. 10
How do you solve these sort of questions quickly
Thanks
Hi Bunuel,
I used following approach to solve this question. Please advise if the approach and my assumption are correct :
Product of all multiples of 3 between 1 and 100 = 3*6*9*12...*99
This can be reduced to 3^33 * 33!
As 3^33 will never contribute a 0 to the result (I think. Please confirm), We can just go ahead by calculating the number of trailing zeroes in 33!
# of trailing 0s in 33! = 7 and hence 7 is the answer.
Please confirm if this is the right approach.
## Found 6,918 Documents (Results 1–100)
### New radial solutions of strong competitive $$m$$-coupled elliptic system with general form in $$B_1(0)$$. (English)Zbl 07548019
MSC: 35J47 35J91 35A01
Full Text:
### Existence and uniqueness results of positive solution of a class of singular Duffing oscillators. (English)Zbl 07547559
MSC: 34B15 34B16 34B18
Full Text:
### Existence of positive solutions for generalized Laplacian problems with a parameter. (English)Zbl 07546036
MSC: 34B08 34B16 35J25
Full Text:
### Multiple solutions for a nonlocal fractional boundary value problem with fractional integral conditions on infinite interval. (English)Zbl 07545970
MSC: 34B08 34B10 34B40
Full Text:
### Existence of solutions for a class of nonlinear impulsive wave equations. (English)Zbl 07545271
MSC: 47H10 58J20 35L15
Full Text:
### Global asymptotic stability of a scalar delay Nicholson’s blowflies equation in periodic environment. (English)Zbl 07541799
MSC: 34C25 34K13 34K25
Full Text:
### Positive solutions to three classes of non-local fourth-order problems with derivative-dependent nonlinearities. (English)Zbl 07541796
MSC: 34B18 34B10 34B15
Full Text:
Full Text:
### Semipositone fractional boundary value problems with n point fractional integral boundary conditions. (English)Zbl 07541756
MSC: 34B10 34B18 39A10
Full Text:
MSC: 34-XX
Full Text:
### Periodic, permanent, and extinct solutions to population models. (English)Zbl 07540680
MSC: 34Kxx 92Dxx 34-XX
Full Text:
### Existence of positive solutions of a Hammerstein integral equation using the layered compression-expansion fixed point theorem. (English)Zbl 07536068
MSC: 45G10 45M20 47N20
Full Text:
### Nonnegative solutions of an indefinite sublinear Robin problem. II: Local and global exactness results. (English)Zbl 07534023
MSC: 35J62 35B09
Full Text:
### Existence of positive periodic solutions for a class of second-order neutral functional differential equations. (English)Zbl 07533161
MSC: 34B18 34C25
Full Text:
### Existence results for the $$\sigma$$-Hilfer hybrid fractional boundary value problem involving a weighted $$\phi$$-Laplacian operator. (English)Zbl 07531113
MSC: 34B15 34B16 34B18
Full Text:
Full Text:
Full Text:
### Positive solutions of singular $$k_i$$-Hessian systems. (English)Zbl 07523650
MSC: 35B09 35J96
Full Text:
Full Text:
Full Text:
Full Text:
MSC: 82-XX
Full Text:
Full Text:
### Wolff-type integral system including $$m$$ equations. (English)Zbl 07506077
MSC: 45G15 45M20 45M05
Full Text:
### Positive periodic solutions to a second-order singular differential equation with indefinite weights. (English)Zbl 07505332
MSC: 34C25 34B16 34B18
Full Text:
### Traveling wave dynamics for Allen-Cahn equations with strong irreversibility. (English)Zbl 07502496
Reviewer: Kelei Wang (Wuhan)
Full Text:
Full Text:
### Existence of solutions for $$p$$-Kirchhoff problem of Brézis-Nirenberg type with singular terms. (English)Zbl 07500999
MSC: 35J62 35B09 35A01
Full Text:
Full Text:
### Existence and multiplicity of positive unbounded solutions for singular BVPs with the $$\phi$$-Laplacian operator on the half line. (English)Zbl 1482.34073
MSC: 34B16 34B18 47N20
Full Text:
Full Text:
### Existence and Ulam-Hyers stability of positive solutions for a nonlinear model for the antarctic circumpolar current. (English)Zbl 07493685
MSC: 34B18 34D10 86A05
Full Text:
### Existence of a positive solution for a class of Choquard equation with upper critical exponent. (English)Zbl 1485.35239
MSC: 35J91 35B33 35A01
Full Text:
Full Text:
### Existence of a positive solution for a class of non-local elliptic problem with critical growth in $$\mathbb{R}^N$$. (English)Zbl 1485.35218
MSC: 35J62 35A01 35A15
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
### Positive solutions for nonlocal dispersal equation. (English)Zbl 1484.45011
MSC: 45M20 45K05 45J05
Full Text:
### Branches of positive solutions of a superlinear indefinite problem driven by the one-dimensional curvature operator. (English)Zbl 07478994
MSC: 34B18 34B09 47N20
Full Text:
### Existence and multiplicity of radially symmetric $$k$$-admissible solutions for Dirichlet problem of $$k$$-Hessian equations. (English)Zbl 07478475
MSC: 34B18 35J60 47N20
Full Text:
MSC: 35Qxx
Full Text:
### On a structure of the set of positive solutions to second-order equations with a super-linear non-linearity. (English)Zbl 07472941
MSC: 34C25 34B18
Full Text:
### Towers of bubbles for Yamabe-type equations and for the Brézis-Nirenberg problem in dimensions $$n \geq 7$$. (English)Zbl 07472157
MSC: 35J61 35R01 35A01
Full Text:
Full Text:
### Exponential growth of solution and asymptotic stability results for Hilfer fractional weighted $$p$$-Laplacian initial value problem with Duffing-type oscillator. (English)Zbl 07459027
MSC: 34B15 34B16 34B18
Full Text:
### Sobolev regularity solutions for a class of singular quasilinear ODEs. (English)Zbl 07455139
Reviewer: Minghe Pei (Jilin)
MSC: 34B18 34B16
Full Text:
### An analysis of nonlocal difference equations with finite convolution coefficients. (English)Zbl 07453461
MSC: 39A27 39A13 26A33
Full Text:
### A constructive approach about the existence of positive solutions for Minkowski curvature problems. (English)Zbl 07453214
MSC: 34B09 34B18 65L10
Full Text:
Full Text:
Full Text:
Full Text:
### Further study on existence and uniqueness of positive solution for tensor equations. (English)Zbl 1480.15016
MSC: 15A24 15A69
Full Text:
### Nontrivial solutions for a second order periodic boundary value problem with the nonlinearity dependent on the derivative. (English)Zbl 07443309
MSC: 34B18 47N20
Full Text:
Full Text:
Full Text:
### Global existence and finite time blow-up for a class of fractional $$p$$-Laplacian Kirchhoff type equations with logarithmic nonlinearity. (English)Zbl 07543229
MSC: 39A13 34B18 34A08
Full Text:
Full Text:
### Minimum functional equation and some Pexider-type functional equation on any group. (English)Zbl 07536392
MSC: 20D60 54E35 11T71
Full Text:
Full Text:
MSC: 82-XX
Full Text:
### Maximal and minimal iterative positive solutions for $$p$$-Laplacian Hadamard fractional differential equations with the derivative term contained in the nonlinear term. (English)Zbl 07533441
MSC: 34B16 34B18
Full Text:
Full Text:
### The heat equation for the Dirichlet fractional Laplacian with Hardy’s potentials: properties of minimal solutions and blow-up. (English)Zbl 07523886
MSC: 35K05 35B09 35S16
### Existence and simulation of positive solutions for $$m$$-point fractional differential equations with derivative terms. (English)Zbl 1485.34052
MSC: 34A08 34B18 34K37
Full Text:
### Multiplicity of positive radial solutions for systems with mean curvature operator in Minkowski space. (English)Zbl 1484.34084
MSC: 34B18 35J66
Full Text:
### Positive solutions of BVPs on the half-line involving functional BCs. (English)Zbl 1484.34085
MSC: 34B18 34B40 34B10
Full Text:
Full Text:
### Bifurcation curves and exact multiplicity of positive solutions for Dirichlet problems with the Minkowski-curvature equation. (English)Zbl 07509925
MSC: 34B09 34B18 34C23
Full Text:
### Fixed point theorems in the study of positive strict set-contractions. (English)Zbl 07493303
MSC: 47H10 47H11 47H08
Full Text:
### Application of generalised Riccati equations to analysis of asymptotic forms of solutions of perturbed half-linear ordinary differential equations. (English)Zbl 1482.34130
MSC: 34D05 34E05
Full Text:
MSC: 15A24
Full Text:
### Positive periodic solution for inertial neural networks with time-varying delays. (English)Zbl 07486828
MSC: 34-XX 92-XX
Full Text:
Full Text:
### A stochastic predator-prey model with Holling II increasing function in the predator. (English)Zbl 07484769
MSC: 92D25 60H10
Full Text:
### Positive definite solutions of a linearly perturbed matrix equation. (English)Zbl 07477939
MSC: 65F10 15A24
Full Text:
### Existence of positive solutions for a pertubed fourth-order equation. (English)Zbl 07477623
MSC: 34B15 34B18 58E05
Full Text:
Full Text:
### A system of nonlinear fractional BVPs with $$\varphi$$-Laplacian operators and nonlocal conditions. (English)Zbl 1478.34012
MSC: 34A08 34B10 34B18
Full Text:
Full Text:
Full Text:
### Positive solutions of higher order nonlinear fractional differential equations with nonlocal initial conditions at resonance. (English)Zbl 07458951
MSC: 34A08 34B15
Full Text:
### Existence and uniqueness of a positive solution to a boundary value problem for a second order functional-differential equation. (English. Russian original)Zbl 1481.34083
Russ. Math. 65, No. 12, 1-5 (2021); translation from Izv. Vyssh. Uchebn. Zaved., Mat. 2021, No. 12, 3-8 (2021).
MSC: 34K10 45J05 45M20
Full Text:
Full Text:
Full Text:
### Multiplicity of positive solutions to semi-linear elliptic problems on metric graphs. (English)Zbl 07451060
MSC: 34B45 34B18
Full Text:
### Existence results on positive solution for a class of nonlocal elliptic systems. (English)Zbl 07449402
MSC: 35B09 35J60
MSC: 11D09
### The positive integer solutions to multivariate Euler function equation $$\varphi({x_1}{x_2} \cdots {x_n}) = {k_1}\varphi({x_1}) + {k_2}\varphi({x_2}) + \cdots + {k_n}\varphi({x_n}) \pm l$$. (Chinese. English summary)Zbl 07448803
MSC: 11B68 11D72
Full Text:
### Existence and multiplicity of positive solutions for singular third-order three-point boundary value problem. (Chinese. English summary)Zbl 07448460
MSC: 34B18 34B16 47N20
Full Text:
### Existence of radial positive solutions for a class of semipositone elliptic equations. (Chinese. English summary)Zbl 07448452
MSC: 35B09 35J99
Full Text:
### Positive solutions for a class of elastic beam equations with indefinite weights. (Chinese. English summary)Zbl 07448450
MSC: 34B18 47N20
Full Text:
MSC: 39-XX
Full Text:
### Existence and multiplicity of positive solutions for Robin problem of one-dimensional prescribed mean curvature equation in Minkowski space. (Chinese. English summary)Zbl 07448440
MSC: 34B18 47N20 34C10
Full Text:
Full Text:
# How can I prove this closed form for $\sum_{n=1}^\infty\frac{(4n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}$
How can I prove the following conjectured identity? $$\mathcal{S}=\sum_{n=1}^\infty\frac{(4\,n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}\stackrel?=\frac{\sqrt3}{2\,\pi}\left(2\sqrt{\frac8{\sqrt\alpha}-\alpha}-2\sqrt\alpha-3\right),$$ where $$\alpha=2\sqrt[3]{1+\sqrt2}-\frac2{\sqrt[3]{1+\sqrt2}}.$$ The conjecture is equivalent to saying that $\pi\,\mathcal{S}$ is the root of the polynomial $$256 x^8-6912 x^6-814752 x^4-13364784 x^2+531441,$$ belonging to the interval $-1<x<0$.
The summand came as a solution to the recurrence relation $$\begin{cases}a(1)=-\frac{81\sqrt3}{512\,\pi}\\\\a(n+1)=-\frac{9\,(2n+1)(4n+1)(4 n+3)}{32\,(n+1)(3n+2)(3n+4)}a(n)\end{cases}.$$ The conjectured closed form was found using computer based on results of numerical summation. The approximate numeric result is $\mathcal{S}=-0.06339748327393640606333225108136874...$ (click to see 1000 digits).
-
Wow! Either there's some slick trick there, or some hard development of something, or else...a miracle's needed here! Where does this come from, context, what have you done so far...? – DonAntonio May 25 '13 at 22:51
Where does this bizarre formula(s) come from? I'd see what happens when the gamma values are expressed in terms of factorial powers (by using the recurrence), it might end up looking like a multinomial coefficient of sorts... – vonbrand May 25 '13 at 22:55
I think expressing $(4n)!=\Gamma(4n+1)$, and using 4-multiplication formula for the gamma function, this becomes some $_pF_q$-function evaluated at particular values of parameters and independent variable (and there one has a lot of funny formulas to play with). It doesn't seem to me to be a real question, like many others of the same kind recently. However, I will not develop this further - the last time my answer ended with a warning from moderator team. – O.L. May 25 '13 at 23:02
Indeed. It's something Ramanujan would come up with. Wolframalpha says the ratio test is inconclusive, but can give a numerical value. – Tito Piezas III May 25 '13 at 23:04
@O.L. I'm inclined to agree about this question, and I would also really like to know how, given a number, one finds precisely the integer polynomial of 8-th degree that it turns out to be the root of. There must be so many polynomials of not too large a degree with approximately the right roots, especially given the magnitude of coefficients. – Kirill May 26 '13 at 2:50
According to Mathematica, the sum is $$\frac{3}{\Gamma(\frac13)\Gamma(\frac23)}\left( -1 + {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; -1\right) \right).$$
This form is actually quite straightforward if you write out $(4n)!$ as $$4^{4n}n!(1/4)_n (1/2)_n (3/4)_n$$ using rising powers ("Pochhammer symbols") and then use the definition of a hypergeometric function.
The hypergeometric function there can be handled with equation 25 here: http://mathworld.wolfram.com/HypergeometricFunction.html: $${}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; y\right)=\frac{1}{1-x^k},$$ where $k=3$, $0\leq x\leq (1+k)^{-1/k}$ and $$y = \left(\frac{x(1-x^k)}{f_k}\right)^k, \qquad f_k = \frac{k}{(1+k)^{(1+1/k)}}.$$
Now setting $y=-1$, we get the polynomial equation in $x$ $$\frac{256}{27} x^3 \left(1-x^3\right)^3 = -1,$$ which has two real roots, neither of them in the necessary interval $[0,(1+k)^{-1/k}=4^{-1/3}]$, since one is $-0.43\ldots$ and the other $1.124\ldots$. However, one of those roots, $x_1=-0.436250\ldots$ just happens to give the (numerically at least) right answer, so never mind that.
Also, note that $$\Gamma(1/3)\Gamma(2/3) = \frac{2\pi}{\sqrt{3}}.$$
The polynomial equation above is in terms of $x^3$, so we can simplify that too a little, so the answer is that the sum equals $$\frac{3^{3/2}}{2\pi} \left(-1+(1-z_1)^{-1}\right),$$ where $z_1$ is a root of the polynomial equation $$256z(1-z)^3+27=0, \qquad z_1=-0.0830249175076244\ldots$$ (The other real root is $\approx 1.42$.)
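For what it's worth, here is a quick numerical sanity check in Julia (my own sketch, using the recurrence quoted in the question; the terms alternate and decay roughly like $n^{-3/2}$, so plain partial sums converge slowly but steadily):
function partial_sum(nterms = 10^6)
    a = -81 * sqrt(3) / (512 * pi)   # a(1) from the recurrence in the question
    S = a
    for n in 1:nterms
        a *= -9 * (2n + 1) / (32 * (n + 1)) * (4n + 1) / (3n + 2) * (4n + 3) / (3n + 4)
        S += a
    end
    return S
end
α = 2 * cbrt(1 + sqrt(2)) - 2 / cbrt(1 + sqrt(2))
closed = sqrt(3) / (2π) * (2 * sqrt(8 / sqrt(α) - α) - 2 * sqrt(α) - 3)
(partial_sum(), closed)   # both ≈ -0.06339748...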
How did you find the conjectured closed form?
-
I calculated the sum in Mathematica and then tried RootApproximant[N[(3 Sqrt[3] (-1 + HypergeometricPFQ[{1/4, 1/2, 3/4}, {2/3, 4/3}, -1]))/(2 Pi), 100]] - it returned a polynomial of degree 33, which failed to match the expression numerically when I used a higher precision. Then I guessed it could work if the factor 1\Pi were excluded and tried RootApproximant[N[(3 Sqrt[3] (-1 + HypergeometricPFQ[{1/4, 1/2, 3/4}, {2/3, 4/3}, -1]))/2, 100]] - it returned a polynomial of degree 8 (the one I mentioned in the question), which agreed with the expression numerically to a very high precision. – Hanna K. May 26 '13 at 3:30
RootApproximant is really powerful tool to find algebraic closed forms if you have enough precision. Sometimes it works better if you explicitly set the maximum degree too a reasonable value (4, 6 or 8) as the second argument. – Hanna K. May 26 '13 at 3:38
Thanks for explaining, that's quite interesting. – Kirill May 26 '13 at 4:11 |
This post isn’t about the first macro I wrote in the Julia programming language, but it can be about your first macro.
## Frequently Asked Questions
Question: What are macros in Julia?
Answer: Macros are sort of functions which take as input unevaluated expressions (Expr) and return as output another expression, whose code is then regularly evaluated at runtime. This post isn’t a substitute for reading the section about macros in the Julia documentation, it’s more complementary to it, a light tutorial for newcomers, but I warmly suggest reading the manual to learn more about them.
Calls to macros are different from calls to regular functions because you need to prepend the name of the macro with the at-sign @ sign: @view A[4, 1:5] is a call to the @view macro, view(A, 4, 1:5) is a call to the view function. There are a couple of important reasons why macro calls are visually distinct from function calls (which sometimes upsets Lisp purists):
• the arguments of function calls are always evaluated before entering the body of the function: in f(g(2, 5.6)), we know that the function g(2, 5.6) will always be evaluated before calling f on its result. Instead, the expression which is given as input to a macro can in principle be discarded and never taken into account and the macro can be something completely different: in @f g(2, 5.6), the expression g(2, 5.6) is taken unevaluated and rewritten into something else. The function g may actually never be called, or it may be called with different arguments, or whatever the macro @f has been designed to rewrite the given expression;
• nested macro calls are evaluated left-to-right, while nested function calls are evaluated right-to-left: in the expression f(g()), the function g() is first called and then fed into f, instead in @f @g x the macro @f will first rewrite the unevaluated expression @g x, and then, if it is still there after the expression-rewrite operated by @f (remember the previous point?), the macro @g is expanded.
Writing a macro can also be an unpleasant experience the first times you try it, because a macro operates on a new level (expressions, which you want to turn into other expressions) and you need to get familiar with new concepts, like hygiene (more on this below), which can be tricky to get initially right. However it may help remembering that the Expr macros operate on are regular Julia objects, which you can access and modify like any other Julia structure. So you can think of a macro as a regular function which operates on Expr objects, to return a new Expr. This isn’t a simplification: many macros are actually defined to only call a regular function which does all the expression rewriting business.
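As a toy illustration of that last point (my own sketch, not from the manual; @twice and _twice_impl are made-up names), a macro can simply forward its unevaluated argument to an ordinary function that builds the returned expression:
function _twice_impl(ex)
    # Build and return a new expression that doubles the result of ex.
    return :(2 * $(esc(ex)))
end
macro twice(ex)
    return _twice_impl(ex)   # the macro itself only delegates
end
@twice 3 + 4   # expands to 2 * (3 + 4) and evaluates to 14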
Q: Should I write a macro?
A: If you’re asking yourself this question, the answer is likely “no”. Steven G. Johnson gave an interesting keynote speech at JuliaCon 2019 about metaprogramming (not just in Julia), explaining when to use it, and more importantly when not to use it.
Also, macros don’t compose very well: remember that any macro can rewrite an expression in a completely arbitrary way, so nesting macros can sometimes have unexpected results, if the outermost macro doesn’t anticipate the possibility the expressions it operates on may contain another macro which expects a specific expression. In practice, this is less of a problem than it may sound, but it can definitely happens if you overuse many complicated macros. This is one more reason why you should not write a macro in your code unless it’s really necessary to substantially simplify the code.
Q: So, what’s the deal with macros?
A: Macros are useful to programmatically generate code which would be tedious, or very complicated, to type manually. Keep in mind that the goal of a macro is to eventually get a new expression, which is run when the macro is called, so the generated code will be executed regularly, no shortcut about that, which is sometimes a misconception. There is very little that plain (i.e. not using macros) Julia code cannot do that macros can.
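For instance, here is a minimal sketch (with made-up function names) of the kind of repetitive code that metaprogramming can generate for you:
for (name, factor) in ((:double, 2), (:triple, 3))
    @eval $(name)(x) = $(factor) * x   # defines double(x) = 2x and triple(x) = 3x
end
double(5)   # 10
triple(5)   # 15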
Q: Why should I keep reading this post?
A: Writing a simple macro can still be a useful exercise to learn how macros work, and to understand when they can be useful.
## Our first macro: @stable
Globals in Julia are usually a major performance hit, when their type is not constant, because the compiler has no idea what their actual type is. When you don't need to reassign a global variable, you can mark it as constant with the const keyword, which greatly improves the performance of accessing a global variable, because the compiler will know its type and can reason about it.
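A minimal sketch of the difference (variable names are made up):
scale = 2.5          # plain global: its type could change at any moment
const SCALE = 2.5    # constant global: the compiler knows it is always a Float64
f_slow(x) = scale * x   # type-unstable: every call must look up the current type of scale
f_fast(x) = SCALE * x   # type-stable: SCALE's type is fixed, so this compiles to fast code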
Julia v1.8 introduces a new way to have non-horribly-slow global variables: you can annotate the type of a global variable, to say that its type won’t change:
julia> x::Float64 = 3.14
3.14
julia> x = -4.2
-4.2
julia> x = 10
10
julia> x
10.0
julia> x = nothing
ERROR: MethodError: Cannot convert an object of type Nothing to an object of type Float64
Closest candidates are:
convert(::Type{T}, ::T) where T<:Number at number.jl:6
convert(::Type{T}, ::Number) where T<:Number at number.jl:7
convert(::Type{T}, ::Base.TwicePrecision) where T<:Number at twiceprecision.jl:273
...
This is different from a const variable, because you can reassign a type-annotated global variable to another value with the same type (or convertible to the annotated type), while reassigning a const variable isn’t possible (nor recommended). But the type would still be constant, which helps the compiler optimising code which accesses this global variable. Note that x = 10 returns 10 (an Int) but the actual value of x is 10.0 (a Float64) because assignment returns the right-hand side but the value 10 is converted to Float64 before assigning it to x.
Wait, there is no macro so far! Right, we’ll get there soon. The problem with type-annotated globals is that in the expression x::Float64 = 3.14 it’s easy to predict the type we want to attach to x, but if you want to make x = f() a type-annotated global variable and the type of f() is much more involved than Float64, perhaps a type with few parameters, then doing the type annotation can be annoying. Mind, not impossible, just tedious. So, that’s where a macro could come in handy!
The idea is to have a macro, which we’ll call @stable, which operates like this:
@stable x = 2
will automatically run something like
x::typeof(2) = 2
so that we can automatically infer the type of x from the expression on the right-hand side, without having to type it ourselves. A useful tool when dealing with macros is Meta.@dump (another macro, yay!).
julia> Meta.@dump x = 2
Expr
head: Symbol =
args: Array{Any}((2,))
1: Symbol x
2: Int64 2
This tells us how the expression x = 2 is parsed into an Expr, when it’s fed into a macro. So, this means that our @stable x = 2 macro will see an Expr whose ex.args[1] field is the name of the variable we want to create and ex.args[2] is the value we want to assign to it, which means the expression we want to generate will be something like ex.args[1]::typeof(ex.args[2]) = ex.args[2], but remember that you need to interpolate variables inside a quoted expression:
julia> macro stable(ex::Expr)
return :( $(ex.args[1])::typeof($(ex.args[2])) = $(ex.args[2]) )
end
@stable (macro with 1 method)
julia> @stable x = 2
2
Well, that was easy, it worked at the first try! Now we can use our brand-new type-stable x! Let's do it!
julia> x
ERROR: UndefVarError: x not defined
Waaat! What happened to our x, we just defined it above! Didn't we? Well, let's use yet another macro, @macroexpand, to see what's going on:
julia> @macroexpand @stable x = 2
:(var"#2#x"::Main.typeof(2) = 2)
Uhm, that looks weird, we were expecting the expression to be x::typeof(2), what's that var"#2#x"? Let's see:
julia> var"#2#x"
ERROR: UndefVarError: #2#x not defined
Another undefined variable, I'm more and more confused. What if that 2 in there is a global counter? Maybe we need to try with 1:
julia> var"#1#x"
2
julia> var"#1#x" = 5
5
julia> var"#1#x"
5
julia> var"#1#x" = nothing
ERROR: MethodError: Cannot convert an object of type Nothing to an object of type Int64
Closest candidates are:
convert(::Type{T}, ::T) where T<:Number at number.jl:6
convert(::Type{T}, ::Number) where T<:Number at number.jl:7
convert(::Type{T}, ::Base.TwicePrecision) where T<:Number at twiceprecision.jl:273
...
Hey, here is our variable, and it's working as we expected! But this isn't as convenient as calling the variable x as we wanted. I don't like that. What's happening?
Alright, we're now running into hygiene, which we mentioned above: this isn't about washing your hands, but about the fact macros need to make sure the variables in the returned expression don't accidentally clash with variables in the scope they expand to. This is achieved by using the gensym function to automatically generate unique identifiers (in the current module) to avoid clashes with local variables. What happened above is that our macro generated a variable with a gensym-ed name, instead of the name we used in the expression, because macros in Julia are hygienic by default. To opt out of this mechanism, we can use the esc function. A rule of thumb is that you should apply esc on input arguments if they contain variables or identifiers from the scope of the calling site that you need to use as they are, but for more details do read the section about hygiene in the Julia manual. Note also that the pattern var"#N#x", with increasing N at every macro call, in the gensym-ed variable name is an implementation detail which may change in future versions of Julia, don't rely on it.
Now we should know how to fix the @stable macro:
julia> macro stable(ex::Expr)
return :( $(esc(ex.args[1]))::typeof($(esc(ex.args[2]))) = $(esc(ex.args[2])) )
end
@stable (macro with 1 method)
julia> @stable x = 2
2
julia> x
2
julia> x = 4.0
4.0
julia> x
4
julia> x = "hello world"
ERROR: MethodError: Cannot convert an object of type String to an object of type Int64
Closest candidates are:
convert(::Type{T}, ::T) where T<:Number at number.jl:6
convert(::Type{T}, ::Number) where T<:Number at number.jl:7
convert(::Type{T}, ::Base.TwicePrecision) where T<:Number at twiceprecision.jl:273
...
julia> @macroexpand @stable x = 2
:(x::Main.typeof(2) = 2)
Cool, this is all working as expected! Are we done now? Yes, we're heading in the right direction, but no, we aren't quite done yet. Let's consider a more sophisticated example, where the right-hand side is a function call and not a simple literal number, which is why we started all of this. For example, let's define a new type-stable variable with a rand() value, and let's print it with one more macro, @show, just to be sure:
julia> @stable y = @show(rand())
rand() = 0.19171602949009747
rand() = 0.5007039099074341
0.5007039099074341
julia> y
0.5007039099074341
Ugh, that doesn’t look good. We’re calling rand() twice and getting two different values? Let’s ask again our friend @macroexpand what’s going on (no need to use @show this time):
julia> @macroexpand @stable y = rand()
:(y::Main.typeof(rand()) = rand())
Oh, I think I see it now: the way we defined the macro, the same expression, rand(), is used twice: once inside typeof, and then on the right-hand side of the assignment, but this means we’re actually calling that function twice, even though the expression is the same. Correct! And this isn’t good for at least two reasons:
• the expression on the right-hand side of the assignment can be expensive to run, and calling it twice wouldn’t be a good outcome: we wanted to create a macro to simply things, not to spend twice as much time;
• the expression on the right-hand side of the assignment can have side effects, which is precisely the case of the rand() function: every time you call rand() you’re advancing the mutable state of the random number generator, but if you call it twice instead of once, you’re doing something unexpected. By simply looking at the code @stable y = rand(), someone would expect that rand() is called exactly once, it’d be bad if users of your macro would experience undesired side effects, which can make for hard-to-debug issues.
In order to avoid double evaluation of the expression, we can assign it to another temporary variable, and then use its value in the assignment expression:
julia> macro stable(ex::Expr)
quote
tmp = $(esc(ex.args[2]))
$(esc(ex.args[1]))::typeof(tmp) = tmp
end
end
@stable (macro with 1 method)
julia> @stable y = @show(rand())
rand() = 0.5954734423582769
0.5954734423582769
This time rand() was called only once! That’s good, isn’t it? It is indeed, but I think we can still improve the macro a little bit. For example, let’s look at the list of all names defined in the current module with names. Can you spot anything strange?
julia> names(@__MODULE__; all=true)
13-element Vector{Symbol}:
Symbol("##meta#58")
Symbol("#1#2")
Symbol("#1#x")
Symbol("#5#tmp")
Symbol("#@stable")
Symbol("@stable")
:Base
:Core
:InteractiveUtils
:Main
:ans
:x
:y
I have a baad feeling about that Symbol("#5#tmp"). Are we leaking the temporary variable in the returned expression? Correct! Admittedly, this isn’t a too big of a deal, the variable is gensym-ed and so it won’t clash with any other local variables thanks to hygiene, many people would just ignore this minor issue, but I believe it’d still be nice to avoid leaking it in the first place, if possible. We can do that by sticking the local keyword in front of the temporary variable:
julia> macro stable(ex::Expr)
quote
local tmp = $(esc(ex.args[2]))
$(esc(ex.args[1]))::typeof(tmp) = tmp
end
end
@stable (macro with 1 method)
julia> @stable y = rand()
0.7029553059625194
julia> @stable y = rand()
0.04552255224129409
julia> names(@__MODULE__; all=true)
13-element Vector{Symbol}:
Symbol("##meta#58")
Symbol("#1#2")
Symbol("#1#x")
Symbol("#5#tmp")
Symbol("#@stable")
Symbol("@stable")
:Base
:Core
:InteractiveUtils
:Main
:ans
:x
:y
Yay, no other leaked temporary variables! Are we done now? Not yet, we can still make it more robust. At the moment we’re assuming that the expression fed into our @stable macro is an assignment, but what if it isn’t the case?
julia> @stable x * 12
4
julia> x
4
Uhm, it doesn’t look like anything happened, x is still 4, maybe we can ignore also this case and move on. Not so fast:
julia> 1 * 2
ERROR: MethodError: objects of type Int64 are not callable
Maybe you forgot to use an operator such as *, ^, %, / etc. ?
Aaargh! What does that even mean?!? Let’s ask our dear friends Meta.@dump and @macroexpand:
julia> Meta.@dump x * 2
Expr
head: Symbol call
args: Array{Any}((3,))
1: Symbol *
2: Symbol x
3: Int64 2
julia> @macroexpand @stable x * 12
quote
var"#5#tmp" = x
(*)::Main.typeof(var"#5#tmp") = var"#5#tmp"
end
julia> *
4
Let me see if I follow: with @stable x * 12 we’re assigning x (which is now ex.args[2]) to the temporary variable, and then the assignment is basically * = 4, because ex.args[1] is now *. Ooops. Brilliant! In particular we’re shadowing * in the current scope (for example the Main module in the REPL, if you’re following along in the REPL) with the number 4, the expression 1 * 2 is actually equivalent to *(1, 2), and since * is 4
julia> 4(1, 2)
ERROR: MethodError: objects of type Int64 are not callable
Maybe you forgot to use an operator such as *, ^, %, / etc. ?
Gotcha! So we should validate the input? Indeed, we should make sure the expression passed to the macro is what we expect, that is an assignment. We’ve already seen before that this means ex.head should be the symbol =. We should also make sure the left-hand side is only a variable name, we don’t want to mess up with indexing expressions like A[1] = 2:
julia> Meta.@dump A[1] = 2
Expr
head: Symbol =
args: Array{Any}((2,))
1: Expr
head: Symbol ref
args: Array{Any}((2,))
1: Symbol A
2: Int64 1
2: Int64 2
Right, so ex.head should be only = and ex.args[1] should only be another symbol. In the other cases we should throw a useful error message. You’re getting the hang of it!
julia> macro stable(ex::Expr)
(ex.head === :(=) && ex.args[1] isa Symbol) || throw(ArgumentError("@stable: $(ex) is not an assignment expression."))
quote
local tmp = $(esc(ex.args[2]))
$(esc(ex.args[1]))::typeof(tmp) = tmp
end
end
@stable (macro with 1 method)
julia> @stable x * 12
ERROR: LoadError: ArgumentError: @stable: x * 12 is not an assignment expression.
julia> @stable A[1] = 2
ERROR: LoadError: ArgumentError: @stable: A[1] = 2 is not an assignment expression.
Awesome, I think I’m now happy with my first macro! Love it! Yes, now it works pretty well and it has also good handling of errors! Nice job!
## Conclusions
I hope this post was instructive to learn how to write a very basic macro in Julia. In the end, the macro we wrote is quite short and not very complicated, but we ran into many pitfalls along the way: hygiene, thinking about corner cases of expressions, avoiding repeated undesired evaluations and introducing extra variables in the scope of the macro’s call-site. This also shows the purpose of macros: rewriting expressions into other ones, to simplify writing more complicated expressions or programmatically write more code.
This post is inspired by a macro which I wrote some months ago for the Seven Lines of Julia thread on JuliaLang Discourse.
This was cool! Where can I learn more about macros? Good to hear! I hope now you aren’t going to abuse macros though! But if you do want to learn something more about macros, in addition to the official documentation, some useful resources are: |
# 2.5: Relations and Functions
##### Learning Objectives
By the end of this section, you will be able to:
• Find the domain and range of a relation
• Determine if a relation is a function
• Find the value of a function
Before you get started, take this readiness quiz.
1. Evaluate $$3x−5$$ when $$x=−2$$.
If you missed this problem, review [link].
2. Evaluate $$2x^2−x−3$$ when $$x=a$$.
If you missed this problem, review [link].
3. Simplify: $$7x−1−4x+5$$.
If you missed this problem, review [link].
## Find the Domain and Range of a Relation
As we go about our daily lives, we have many data items or quantities that are paired to our names. Our social security number, student ID number, email address, phone number and our birthday are matched to our name. There is a relationship between our name and each of those items.
When your professor gets her class roster, the names of all the students in the class are listed in one column and then the student ID number is likely to be in the next column. If we think of the correspondence as a set of ordered pairs, where the first element is a student name and the second element is that student’s ID number, we call this a relation.
$(\text{Student name}, \text{ Student ID #})\nonumber$
The set of all the names of the students in the class is called the domain of the relation and the set of all student ID numbers paired with these students is the range of the relation.
There are many similar situations where one variable is paired or matched with another. The set of ordered pairs that records this matching is a relation.
##### Definition: Relation
A relation is any set of ordered pairs, $$(x,y)$$. All the x-values in the ordered pairs together make up the domain. All the y-values in the ordered pairs together make up the range.
##### Example $$\PageIndex{1}$$
For the relation $${(1,1),(2,4),(3,9),(4,16),(5,25)}$$:
1. Find the domain of the relation.
2. Find the range of the relation.
$\begin{array} {ll} {} &{ {\{(1,1), (2,4), (3,9), (4,16), (5,25) }\} } \\ {ⓐ\text{ The domain is the set of all x-values of the relation.}} &{ {\{1,2,3,4,5}\} } \\ {ⓑ\text{ The range is the set of all y-values of the relation.}} &{ {\{1,4,9,16,25}\} } \\ \nonumber \end{array}$
##### Example $$\PageIndex{2}$$
For the relation $${\{(1,1),(2,8),(3,27),(4,64),(5,125)}\}$$:
1. Find the domain of the relation.
2. Find the range of the relation.
$${\{1,2,3,4,5}\}$$
$${\{1,8,27,64,125}\}$$
##### Example $$\PageIndex{3}$$
For the relation $${\{(1,3),(2,6),(3,9),(4,12),(5,15)}\}$$:
1. Find the domain of the relation.
2. Find the range of the relation.
$${\{1,2,3,4,5}\}$$
$${\{3,6,9,12,15}\}$$
##### MAPPING
A mapping is sometimes used to show a relation. The arrows show the pairing of the elements of the domain with the elements of the range.
##### Example $$\PageIndex{4}$$
Use the mapping of the relation shown to
1. list the ordered pairs of the relation,
2. find the domain of the relation, and
3. find the range of the relation.
ⓐ The arrow shows the matching of the person to their birthday. We create ordered pairs with the person’s name as the x-value and their birthday as the y-value.
{(Alison, April 25), (Penelope, May 23), (June, August 2), (Gregory, September 15), (Geoffrey, January 12), (Lauren, May 10), (Stephen, July 24), (Alice, February 3), (Liz, August 2), (Danny, July 24)}
ⓑ The domain is the set of all x-values of the relation.
{Alison, Penelope, June, Gregory, Geoffrey, Lauren, Stephen, Alice, Liz, Danny}
ⓒ The range is the set of all y-values of the relation.
{January 12, February 3, April 25, May 10, May 23, July 24, August 2, September 15}
##### Example $$\PageIndex{5}$$
Use the mapping of the relation shown to
1. list the ordered pairs of the relation
2. find the domain of the relation
3. find the range of the relation.
ⓐ (Khanh Nguyen, kn68413), (Abigail Brown, ab56781), (Sumantha Mishal, sm32479), (Jose Hernandez, jh47983)
ⓑ {Khanh Nguyen, Abigail Brown, Sumantha Mishal, Jose Hernandez}
ⓒ {kn68413, ab56781, sm32479, jh47983}
##### Example $$\PageIndex{6}$$
Use the mapping of the relation shown to
1. list the ordered pairs of the relation
2. find the domain of the relation
3. find the range of the relation.
ⓐ (Maria, November 6), (Armando, January 18), (Cynthia, December 8), (Kelly, March 15), (Rachel, November 6)
ⓑ {Maria, Armando, Cynthia, Kelly, Rachel}
ⓒ{November 6, January 18, December 8, March 15}
A graph is yet another way that a relation can be represented. The set of ordered pairs of all the points plotted is the relation. The set of all x-coordinates is the domain of the relation and the set of all y-coordinates is the range. Generally we write the numbers in ascending order for both the domain and range.
##### Example $$\PageIndex{7}$$
Use the graph of the relation to
1. list the ordered pairs of the relation
2. find the domain of the relation
3. find the range of the relation.
ⓐ The ordered pairs of the relation are: ${\{(1,5),(−3,−1),(4,−2),(0,3),(2,−2),(−3,4)}\}.\nonumber$
ⓑ The domain is the set of all x-values of the relation: $$\quad {\{−3,0,1,2,4}\}$$.
Notice that while $$−3$$ repeats, it is only listed once.
ⓒ The range is the set of all y-values of the relation: $$\quad {\{−2,−1,3,4,5}\}$$.
Notice that while $$−2$$ repeats, it is only listed once.
##### Example $$\PageIndex{8}$$
Use the graph of the relation to
1. list the ordered pairs of the relation
2. find the domain of the relation
3. find the range of the relation.
ⓐ $$(−3,3),(−2,2),(−1,0),$$
$$(0,−1),(2,−2),(4,−4)$$
ⓑ $${\{−3,−2,−1,0,2,4}\}$$
ⓒ $${\{3,2,0,−1,−2,−4}\}$$
##### Example $$\PageIndex{9}$$
Use the graph of the relation to
1. list the ordered pairs of the relation
2. find the domain of the relation
3. find the range of the relation.
ⓐ $$(−3,0),(−3,5),(−3,−6),$$
$$(−1,−2),(1,2),(4,−4)$$
ⓑ $${\{−3,−1,1,4}\}$$
ⓒ $${\{−6,0,5,−2,2,−4}\}$$
## Determine if a Relation is a Function
A special type of relation, called a function, occurs extensively in mathematics. A function is a relation that assigns to each element in its domain exactly one element in the range. For each ordered pair in the relation, each x-value is matched with only one y-value.
##### Definition: Function
A function is a relation that assigns to each element in its domain exactly one element in the range.
The birthday example from Example helps us understand this definition. Every person has a birthday but no one has two birthdays. It is okay for two people to share a birthday. It is okay that Danny and Stephen share July 24th as their birthday and that June and Liz share August 2nd. Since each person has exactly one birthday, the relation in Example is a function.
The relation shown by the graph in Example includes the ordered pairs $$(−3,−1)$$ and $$(−3,4)$$. Is that okay in a function? No, as this is like one person having two different birthdays.
##### Example $$\PageIndex{10}$$
Use the set of ordered pairs to (i) determine whether the relation is a function (ii) find the domain of the relation (iii) find the range of the relation.
1. $${\{(−3,27),(−2,8),(−1,1),(0,0),(1,1),(2,8),(3,27)}\}$$
2. $${\{(9,−3),(4,−2),(1,−1),(0,0),(1,1),(4,2),(9,3)}\}$$
ⓐ $${\{(−3,27),(−2,8),(−1,1),(0,0),(1,1),(2,8),(3,27)}\}$$
(i) Each x-value is matched with only one y-value. So this relation is a function.
(ii) The domain is the set of all x-values in the relation.
The domain is: $${\{−3,−2,−1,0,1,2,3}\}$$.
(iii) The range is the set of all y-values in the relation. Notice we do not list range values twice.
The range is: $${\{27,8,1,0}\}$$.
ⓑ $${\{(9,−3),(4,−2),(1,−1),(0,0),(1,1),(4,2),(9,3)}\}$$
(i) The x-value 9 is matched with two y-values, both 3 and $$−3$$. So this relation is not a function.
(ii) The domain is the set of all x-values in the relation. Notice we do not list domain values twice.
The domain is: $${\{0,1,2,4,9}\}$$.
(iii) The range is the set of all y-values in the relation.
The range is: $${\{−3,−2,−1,0,1,2,3}\}$$.
##### Example $$\PageIndex{11}$$
Use the set of ordered pairs to (i) determine whether the relation is a function (ii) find the domain of the relation (iii) find the range of the function.
1. $${\{(−3,−6),(−2,−4),(−1,−2),(0,0),(1,2),(2,4),(3,6)}\}$$
2. $${\{(8,−4),(4,−2),(2,−1),(0,0),(2,1),(4,2),(8,4)}\}$$
ⓐ Yes; $${\{−3,−2,−1,0,1,2,3}\}$$;
$${\{−6,−4,−2,0,2,4,6}\}$$
ⓑ No; $${\{0,2,4,8}\}$$;
$${\{−4,−2,−1,0,1,2,4}\}$$
##### Example $$\PageIndex{12}$$
Use the set of ordered pairs to (i) determine whether the relation is a function (ii) find the domain of the relation (iii) find the range of the relation.
1. $${\{(27,−3),(8,−2),(1,−1),(0,0),(1,1),(8,2),(27,3)}\}$$
2. $${\{(7,−3),(−5,−4),(8,0),(0,0),(−6,4),(−2,2),(−1,3)}\}$$
ⓐ No; $${\{0,1,8,27}\}$$;
$${\{−3,−2,−1,0,1,2,3}\}$$
ⓑ Yes; $${\{7,−5,8,0,−6,−2,−1}\}$$;
$${\{−3,−4,0,4,2,3}\}$$
##### Example $$\PageIndex{13}$$
Use the mapping to
1. determine whether the relation is a function
2. find the domain of the relation
3. find the range of the relation.
ⓐ Both Lydia and Marty have two phone numbers. So each x-value is not matched with only one y-value. So this relation is not a function.
ⓑ The domain is the set of all x-values in the relation. The domain is: {Lydia, Eugene, Janet, Rick, Marty}
ⓒ The range is the set of all y-values in the relation. The range is:
$${\{321-549-3327, 427-658-2314, 321-964-7324, 684-358-7961, 684-369-7231, 798-367-8541}\}$$
##### Example $$\PageIndex{14}$$
Use the mapping to ⓐ determine whether the relation is a function ⓑ find the domain of the relation ⓒ find the range of the relation.
ⓐ no ⓑ {NBC, HGTV, HBO} ⓒ {Ellen Degeneres Show, Law and Order, Tonight Show, Property Brothers, House Hunters, Love it or List it, Game of Thrones, True Detective, Sesame Street}
##### Example $$\PageIndex{15}$$
Use the mapping to
1. determine whether the relation is a function
2. find the domain of the relation
3. find the range of the relation.
ⓐ No ⓑ {Neal, Krystal, Kelvin, George, Christa, Mike} ⓒ {123-567-4839 work, 231-378-5941 cell, 743-469-9731 cell, 567-534-2970 work, 684-369-7231 cell, 798-367-8541 cell, 639-847-6971 cell}
In algebra, more often than not, functions will be represented by an equation. It is easiest to see if the equation is a function when it is solved for y. If each value of x results in only one value of y, then the equation defines a function.
##### Example $$\PageIndex{16}$$
Determine whether each equation is a function.
1. $$2x+y=7$$
2. $$y=x^2+1$$
3. $$x+y^2=3$$
ⓐ $$2x+y=7$$
For each value of x, we multiply it by $$−2$$ and then add 7 to get the y-value
For example, if $$x=3$$:
We have that when $$x=3$$, then $$y=1$$. It would work similarly for any value of x. Since each value of x, corresponds to only one value of y the equation defines a function.
ⓑ $$y=x^2+1$$
For each value of x, we square it and then add 1 to get the y-value.
For example, if $$x=2$$:
We have that when $$x=2$$, then $$y=5$$. It would work similarly for any value of x. Since each value of x, corresponds to only one value of y the equation defines a function.
ⓒ $$x+y^2=3$$
Isolate the y term. Let’s substitute $$x=2$$. This gives us two values for y. $$y=1\space y=−1$$
We have shown that when $$x=2$$, then $$y=1$$ and $$y=−1$$. It would work similarly for any value of x. Since each value of x does not correspond to only one value of y, the equation does not define a function.
##### Example $$\PageIndex{17}$$
Determine whether each equation is a function.
1. $$4x+y=−3$$
2. $$x+y^2=1$$
3. $$y−x^2=2$$
ⓐ yes ⓑ no ⓒ yes
##### Example $$\PageIndex{18}$$
Determine whether each equation is a function.
1. $$x+y^2=4$$
2. $$y=x^2−7$$
3. $$y=5x−4$$
ⓐ no ⓑ yes ⓒ yes
## Find the Value of a Function
It is very convenient to name a function and most often we name it f, g, h, F, G, or H. In any function, for each x-value from the domain we get a corresponding y-value in the range. For the function $$f$$, we write this range value $$y$$ as $$f(x)$$. This is called function notation and is read $$f$$ of $$x$$ or the value of $$f$$ at $$x$$. In this case the parentheses do not indicate multiplication.
##### Definition: Function Notation
For the function $$y=f(x)$$
$\begin{array} {l} {f\text{ is the name of the function}} \\{x \text{ is the domain value}} \\ {f(x) \text{ is the range value } y \text{ corresponding to the value } x} \\ \nonumber \end{array}$
We read $$f(x)$$ as $$f$$ of $$x$$ or the value of $$f$$ at $$x$$.
We call x the independent variable as it can be any value in the domain. We call y the dependent variable as its value depends on x.
##### INDEPENDENT AND DEPENDENT VARIABLES
For the function $$y=f(x)$$,
$\begin{array} {l} {x \text{ is the independent variable as it can be any value in the domain}} \\ {y \text{ is the dependent variable as its value depends on } x} \\ \nonumber \end{array}$
Much as when you first encountered the variable x, function notation may be rather unsettling. It seems strange because it is new. You will feel more comfortable with the notation as you use it.
Let’s look at the equation $$y=4x−5$$. To find the value of y when $$x=2$$, we know to substitute $$x=2$$ into the equation and then simplify.
Let x=2.
The value of the function at $$x=2$$ is 3.
We do the same thing using function notation, the equation $$y=4x−5$$ can be written as $$f(x)=4x−5$$. To find the value when $$x=2$$, we write:
Let x=2.
The value of the function at $$x=2$$ is 3.
This process of finding the value of $$f(x)$$ for a given value of x is called evaluating the function.
##### Example $$\PageIndex{19}$$
For the function $$f(x)=2x^2+3x−1$$, evaluate the function.
1. $$f(3)$$
2. $$f(−2)$$
3. $$f(a)$$
To evaluate $$f(3)$$, substitute 3 for x. Simplify.
Simplify.
To evaluate $$f(a)$$, substitute a for x. Simplify.
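Written out, the three evaluations are (a reconstruction of the worked steps shown as images in the original): $\begin{array} {l} {f(3)=2(3)^2+3(3)-1=18+9-1=26} \\ {f(-2)=2(-2)^2+3(-2)-1=8-6-1=1} \\ {f(a)=2a^2+3a-1} \\ \nonumber \end{array}$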
##### Example $$\PageIndex{20}$$
For the function $$f(x)=3x^2−2x+1$$, evaluate the function.
1. $$f(3)$$
2. $$f(−1)$$
3. $$f(t)$$
ⓐ $$f(3)=22$$ ⓑ $$f(−1)=6$$ ⓒ $$f(t)=3t^2−2t+1$$
##### Example $$\PageIndex{21}$$
For the function $$f(x)=2x^2+4x−3$$, evaluate the function.
1. $$f(2)$$
2. $$f(−3)$$
3. $$f(h)$$
ⓐ $$f(2)=13$$ ⓑ $$f(−3)=3$$ ⓒ $$f(h)=2h^2+4h−3$$
In the last example, we found $$f(x)$$ for a constant value of x. In the next example, we are asked to find $$g(x)$$ with values of x that are variables. We still follow the same procedure and substitute the variables in for the x.
##### Example $$\PageIndex{22}$$
For the function $$g(x)=3x−5$$, evaluate the function.
1. $$g(h^2)$$
2. $$g(x+2)$$
3. $$g(x)+g(2)$$
To evaluate $$g(h^2)$$, substitute $$h^2$$ for x.
To evaluate $$g(x+2)$$, substitute $$x+2$$ for x. Simplify.
To evaluate $$g(x)+g(2)$$, first find $$g(2)$$. Simplify.
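Written out, using $$g(x)=3x−5$$ (reconstructed): $\begin{array} {l} {g(h^2)=3h^2-5} \\ {g(x+2)=3(x+2)-5=3x+6-5=3x+1} \\ {g(x)+g(2)=(3x-5)+(3(2)-5)=3x-5+1=3x-4} \\ \nonumber \end{array}$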
Notice the difference between part ⓑ and ⓒ. We get $$g(x+2)=3x+1$$ and $$g(x)+g(2)=3x−4$$. So we see that $$g(x+2)\neq g(x)+g(2)$$.
##### Example $$\PageIndex{23}$$
For the function $$g(x)=4x−7$$, evaluate the function.
1. $$g(m^2)$$
2. $$g(x−3)$$
3. $$g(x)−g(3)$$
ⓐ $$4m^2−7$$ ⓑ $$4x−19$$ ⓒ $$4x−12$$
##### Example $$\PageIndex{24}$$
For the function $$h(x)=2x+1$$, evaluate the function.
1. $$h(k^2)$$
2. $$h(x+1)$$
3. $$h(x)+h(1)$$
ⓐ $$2k^2+1$$ ⓑ $$2x+3$$
ⓒ $$2x+4$$
Many everyday situations can be modeled using functions.
##### Example $$\PageIndex{25}$$
The number of unread emails in Sylvia’s account is 75. This number grows by 10 unread emails a day. The function $$N(t)=75+10t$$ represents the relation between the number of emails, N, and the time, t, measured in days.
1. Determine the independent and dependent variable.
2. Find $$N(5)$$. Explain what this result means.
ⓐ The number of unread emails is a function of the number of days. The number of unread emails, N, depends on the number of days, t. Therefore, the variable N is the dependent variable and the variable t is the independent variable.
ⓑ Find $$N(5)$$. Explain what this result means.
Substitute in $$t=5$$. Simplify.
Since 5 is the number of days, $$N(5)$$, is the number of unread emails after 5 days. After 5 days, there are 125 unread emails in the account.
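The computation behind this result, reconstructed: $\begin{array} {l} {N(t)=75+10t} \\ {N(5)=75+10(5)=125} \\ \nonumber \end{array}$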
##### Example $$\PageIndex{26}$$
The number of unread emails in Bryan’s account is 100. This number grows by 15 unread emails a day. The function $$N(t)=100+15t$$ represents the relation between the number of emails, N, and the time, t, measured in days.
1. Determine the independent and dependent variable.
2. Find $$N(7)$$. Explain what this result means.
ⓐ t is the independent variable; N is the dependent variable. ⓑ 205; the number of unread emails in Bryan’s account on the seventh day.
##### Example $$\PageIndex{27}$$
The number of unread emails in Anthony’s account is 110. This number grows by 25 unread emails a day. The function $$N(t)=110+25t$$ represents the relation between the number of emails, N, and the time, t, measured in days.
1. Determine the independent and dependent variable.
2. Find $$N(14)$$. Explain what this result means.
ⓐ t is the independent variable; N is the dependent variable. ⓑ 460; the number of unread emails in Anthony’s account on the fourteenth day.
Access this online resource for additional instruction and practice with relations and functions.
## Key Concepts
• Function Notation: For the function $$y=f(x)$$
• f is the name of the function
• x is the domain value
• $$f(x)$$ is the range value y corresponding to the value x
We read $$f(x)$$ as f of x or the value of f at x.
• Independent and Dependent Variables: For the function $$y=f(x)$$,
• x is the independent variable as it can be any value in the domain
• y is the dependent variable as its value depends on x
## Glossary
domain of a relation
The domain of a relation is all the x-values in the ordered pairs of the relation.
function
A function is a relation that assigns to each element in its domain exactly one element in the range.
mapping
A mapping is sometimes used to show a relation. The arrows show the pairing of the elements of the domain with the elements of the range.
range of a relation
The range of a relation is all the y-values in the ordered pairs of the relation.
relation
A relation is any set of ordered pairs, (x,y). All the x-values in the ordered pairs together make up the domain. All the y-values in the ordered pairs together make up the range.
This page titled 2.5: Relations and Functions is shared under a CC BY license and was authored, remixed, and/or curated by OpenStax. |
# Scientific Notation Practice
In this scientific notation practice worksheet, students convert numbers from standard notation to scientific notation and from scientific notation to standard notation; they add, subtract, multiply, and divide, and they determine significant figures in their answers.
# Limit number of switches in employee scheduling problem
Here is a scheduling problem I need to solve. Given the demand for 2 positions in 1 week with 3 shifts per position, I need to allocate the employees accordingly with some extra operational constraints. Note that each employee can work at any position but only one shift per day in total. The main objective here is to minimize the total shift switches within the week. First I shall introduce my variables and the constraints and then how I formulated it mathematically.
Binary variables:
1. Employee: $$x_{i}$$, i=1:N
2. Working day per employee: $$y_{i,j}$$, i=1:N | j=1:7
3. Working day/shift/position per employee: $$z_{i,j,k,l}$$, i=1:N | j=1:7 | k = 1:3 | l = 1:2
4. Shift switch per employee per day: $$s_{i,j,k}$$, i=1:N | j=1:7 | k = 1:3
Constraints:
1. One shift per day per employee: $$\sum_{k,l} z_{i,j,k,l} \leqslant y_{i,j} \ \ \forall i,j$$
2. One position per day per employee: $$z_{i,j,k,1}+ z_{i,j,k,2} \leqslant1 \ \ \forall i,j,k$$
3. Maximum working days per employee (6 days): $$\sum_{j} y_{i,j} \leq x_{i}\cdot D_{max} \ \ \forall i=1:N$$
4. Minimum working days per employee (5 days): $$\sum_{j} y_{i,j} \geq x_{i}\cdot D_{min} \ \ \forall i=1:N$$
5. Comply with weekly demand (D): $$\sum_{i} z_{i,j,k,l} = D_{j,k,l} \ \ \forall j,k,l$$
6. Detect any shift switch day over day: $$\sum_{l}z_{i,j,k,l}- \sum_{l}z_{i,j-1,k,l} \leqslant s_{i,j,k} \ \ \forall i,j=2:7,k$$
I followed this method to introduce continuity in shift assignment and reduce switches
Objective:
Minimize shift switches day over day across all employees:$$min \sum_{i,j,k}s_{i,j,k} \ \ \forall i,j=2:7,k$$
Running this in R with ompr package I get this results:
Employees are placed in rows and the columns depict the days of the week. Values denote the shift an employee is assigned to. Missing values (NA) correspond to an employee's day off according to the constraints.
This is not the best solution; at first glance, this could be solved with very few employees having a shift change within the week and the rest assigned to a single shift throughout the week. I guess this is due to the fact that any day off followed by a shift assignment is counted as a change. Any thoughts?
• typo in constraint 6. (updated) – Psyndrom Ventura Jun 23 at 13:19
• Even with your change from plus to minus in constraint 6, it is not correct when you can have days off. What you have would force $s_{i,j,k}=1$ when the employee works on day $j$ but is off on day $j-1$. – RobPratt Jun 23 at 13:29
You can omit constraint 2 because it is dominated by constraint 1.
Constraint 6 is not correct. For consecutive days, you want $$\sum_l z_{i,j,k,l}+\sum_l z_{i,j-1,k_2,l}-1\le s_{i,j,k},$$ where $$k\not= k_2$$.
For a day off in between, you want $$\sum_l z_{i,j,k,l}+(1-y_{i,j-1})+\sum_l z_{i,j-2,k_2,l}-2\le s_{i,j,k},$$ where $$k\not= k_2$$.
For two days off in between, you want $$\sum_l z_{i,j,k,l}+(1-y_{i,j-1})+(1-y_{i,j-2})+\sum_l z_{i,j-3,k_2,l}-3\le s_{i,j,k},$$ where $$k\not= k_2$$.
• First of alI I have to admit your loyalty :) to all of my questions. I really appreciate this! I guess I can follow your thought but what I miss here is the concept. I do not know apriori where I expect to spot the day off(s), start/end/mid week. I can not implement all suggested constraints simultaneously, right? – Psyndrom Ventura Jun 23 at 13:53
• Yes, you should implement all three suggested constraints simultaneously. The idea is to avoid a pattern like $(w_j,w_{j+1},w_{j+2})=(1,0,1)$ where $w$ is binary, you equivalently want to avoid $(w_j,1-w_{j+1},w_{j+2})=(1,1,1)$, which you can prohibit via a linear "no-good" constraint $w_j+(1-w_{j+1})+w_{j+2} \le 3 - 1$. By the way, you can replace $s_{i,j,k}$ with just $s_{i,j}$. – RobPratt Jun 23 at 15:13
@RobPratt, thanks for the clarification. I tried to incorporate the above into my MILP model, but I do not get a feasible solution even after half an hour of searching, which is strange. Following your explanation, I ended up with something different that returns a feasible solution quickly. Here it is: Introduce decision variable: $$s_{i,k}$$.
and this is the constraint I propose: $$\sum_{j,l} z_{i,j,k,l} \le 7\cdot s_{i,k}$$
If the shift is assigned on at least one day, then the corresponding decision variable is enabled. Since I want the minimum number of switches, I need an objective that assigns a single shift per employee as much as possible.
The objective function can be altered to: $$\min \sum_{i,k} s_{i,k}$$ This way I can minimize switches in terms of shift regardless of what happens within the week. What is your opinion?
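For illustration only, here is a minimal sketch of this reformulation in Python with PuLP rather than the R/ompr used above; the set sizes are placeholders, and the remaining constraints from the question (demand, working days, one shift per day) would still need to be added before solving.
import pulp

EMP = range(4)       # employees (placeholder size N = 4)
DAYS = range(7)      # days of the week
SHIFTS = range(3)    # shifts
POS = range(2)       # positions

prob = pulp.LpProblem("min_shift_switches", pulp.LpMinimize)
z = pulp.LpVariable.dicts("z", (EMP, DAYS, SHIFTS, POS), cat="Binary")
s = pulp.LpVariable.dicts("s", (EMP, SHIFTS), cat="Binary")

# Objective: use as few distinct shifts per employee as possible.
prob += pulp.lpSum(s[i][k] for i in EMP for k in SHIFTS)

# Linking constraint: if employee i ever works shift k (any day, any position),
# force s[i][k] = 1. The coefficient 7 is valid only because the one-shift-per-day
# constraint caps the left-hand side at 7, as discussed in the comments below.
for i in EMP:
    for k in SHIFTS:
        prob += (pulp.lpSum(z[i][j][k][l] for j in DAYS for l in POS)
                 <= 7 * s[i][k])

# ...demand, working-day and one-shift-per-day constraints from the question
# would be added here before calling prob.solve().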
• This looks reasonable, although you might also consider the stronger "disaggregated" formulation $z_{i,j,k,l} \le s_{i,k}$. – RobPratt Jun 30 at 11:13
• Note that your reformulation relies on the fact that every employee works at least one shift, as enforced by constraint 4. – RobPratt Jun 30 at 12:56
• If I use the disaggregated formulation, then this is just simply imposing same shift across days per driver, right? I do not want this because in some cases this is not feasible. – Psyndrom Ventura Jul 1 at 12:54
• Both your proposed constraint and the disaggregated one enforce the logical implication $z_{i,j,k,l}=1 \implies s_{i,k} = 1$. In words, if employee $i$ on day $j$ works shift $k$ in position $l$, then employee $i$ works shift $k$ at least once. I had mistakenly interpreted the $7$ as the number of $j,l$ combinations, which is instead $7\cdot 2=14$, so the disaggregated form is not stronger. Adding up the disaggregated constraints yields your proposed constraint, except with $14$ instead of $7$. Neither form imposes the same shift across days per driver. – RobPratt Jul 1 at 13:57 |
# Initializing a Random System
## Overview
### Questions
• How can I generate a random initial condition?
• What units does HOOMD-blue use?
### Objectives
• Describe the units HOOMD employs in molecular dynamics simulations.
• Demonstrate how to place particles on an initial lattice and randomize the configuration with an MD simulation.
• Explain why initial random velocities are important when using the NVT integration method.
• Show how to use ThermodynamicQuantities to compute properties of the system.
• Address the difference between kinetic temperature and temperature.
## Boilerplate code
[1]:
import itertools
import math
import gsd.hoomd
import hoomd
import numpy
The render function in the next (hidden) cell will render the system state using fresnel.
This is not intended as a full tutorial on fresnel - see the fresnel user documentation if you would like to learn more.
## Procedure
One effective way to initialize a random configuration of particles is to start with a low density non-overlapping configuration and run a simulation that compresses the system to the target density. This section of the tutorial will place particles on a simple cubic lattice and run a short simulation allowing them to relax into the fluid state.
## Units
You need to know what system of units HOOMD-blue uses so that you can place particles at appropriate separations in the initial configuration.
HOOMD-blue does not adopt any particular real system of units. Instead, HOOMD-blue uses an internally self-consistent system of units and is compatible with many systems of units. For example: if you select the units of meter, Joule, and kilogram for length, energy and mass then the units of force will be Newtons and velocity will be meters/second. A popular system of units for nano-scale systems is nanometers, kilojoules/mol, and atomic mass units.
In molecular dynamics, the primary units are length, energy, and mass. Other units are derived from these, for example $$[\mathrm{pressure}] = \left(\frac{\mathrm{[energy]}}{\mathrm{[length]}^3}\right)$$ and $$[\mathrm{time}] = \sqrt{\frac{\mathrm{[mass]}\cdot\mathrm{[length]}^2}{\mathrm{[energy]}}}$$. Some quantities involve physical constants as well, such as charge which has units of $$\sqrt{4\pi\epsilon_0\cdot\mathrm{[length]}\cdot\mathrm{[energy]}}$$ (where $$\epsilon_0$$ is the permittivity of free space), and thermal energy $$kT$$ (where k is Boltzmann’s constant). HOOMD-blue never uses the temperature T directly. Instead it always appears indirectly in the value $$kT$$ which has units of energy.
HOOMD-blue does not perform unit conversions. You provide all parameters in this system of units and all outputs will be given in these units. The documentation for each property and parameter will list the units. For the parameters set in this tutorial so far, the integrator’s dt is in time units, the pair potentials epsilon is in energy units while sigma and r_cut are in length units.
You can interpret these values in the nano-scale units mentioned previously:
Unit
Value
[length]
nanometer
[energy]
kilojoules/mol
[mass]
atomic mass unit
[time]
picoseconds
[volume]
cubic nanometers
[velocity]
nm/picosecond
[momentum]
amu nm/picosecond
[acceleration]
nm/picosecond^2
[force]
kilojoules/mol/nm
[pressure]
kilojoules/mol/nm^3
k
0.0083144626181532 kJ/mol/Kelvin
For example, the values used in this tutorial could represent a system with 1 nanometer diameter particles that interact with a well depth of 1 kilojoule/mol at a thermal energy of 1.5 kilojoules/mol (which implies $$T \approx 180$$ Kelvin).
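As a quick side check of this unit system (not part of the tutorial proper, and assuming the standard physical constants written below), the derived time unit works out to one picosecond and $$kT=1.5$$ corresponds to roughly 180 Kelvin:
import math

amu = 1.66053906660e-27              # kg
nm = 1.0e-9                          # m
kJ_per_mol = 1.0e3 / 6.02214076e23   # J

# derived time unit: sqrt([mass] * [length]^2 / [energy])
tau = math.sqrt(amu * nm**2 / kJ_per_mol)
print(tau)      # ~1.0e-12 s, i.e. about one picosecond

k = 0.0083144626181532               # kJ/mol/Kelvin
print(1.5 / k)  # ~180 Kelvin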
## Initial condition
The Lennard-Jones system self-assembles into the fcc structure at moderately high densities. To keep this tutorial’s run time short, it simulates a small number of particles commensurate with the fcc structure (4 * m**3, where m is an integer).
[4]:
m = 4
N_particles = 4 * m**3
In molecular dynamics, particles can theoretically have any position in the periodic box. However, the steepness of the Lennard-Jones potential near $$r \approx \sigma$$ leads to extremely large forces that destabilize the numerical integration method. Practically, you need to choose an initial condition with particles where their hard cores do not overlap. The Lennard-Jones potential used in this tutorial represents a sphere with diameter ~1, so place particles a little bit further than that apart on a KxKxK simple cubic lattice of width L. Later, this section will run an MD simulation allowing the particles to expand and fill the box randomly.
This is the same code you used in Introducing HOOMD-blue tutorial. See that tutorial for a more detailed description.
[5]:
spacing = 1.3
K = math.ceil(N_particles**(1 / 3))
L = K * spacing
x = numpy.linspace(-L / 2, L / 2, K, endpoint=False)
position = list(itertools.product(x, repeat=3))
snapshot = gsd.hoomd.Snapshot()
snapshot.particles.N = N_particles
snapshot.particles.position = position[0:N_particles]
snapshot.particles.typeid = [0] * N_particles
snapshot.configuration.box = [L, L, L, 0, 0, 0]
The single particle type needs a name. Call it A because it is short and the first letter of the alphabet:
[6]:
snapshot.particles.types = ['A']
Here is what the system looks like now:
[7]:
render(snapshot)
[7]:
Write this snapshot to lattice.gsd:
[8]:
with gsd.hoomd.open(name='lattice.gsd', mode='xb') as f:
f.append(snapshot)
## Initialize the simulation
Configure this simulation to run on the CPU:
[9]:
cpu = hoomd.device.CPU()
sim = hoomd.Simulation(device=cpu, seed=1)
sim.create_state_from_gsd(filename='lattice.gsd')
The simulation seed will be used when randomizing velocities later in the notebook.
Set up the molecular dynamics simulation, as discussed in the previous section of this tutorial:
[10]:
integrator = hoomd.md.Integrator(dt=0.005)
cell = hoomd.md.nlist.Cell(buffer=0.4)
lj = hoomd.md.pair.LJ(nlist=cell)
lj.params[('A', 'A')] = dict(epsilon=1, sigma=1)
lj.r_cut[('A', 'A')] = 2.5
integrator.forces.append(lj)
nvt = hoomd.md.methods.NVT(kT=1.5, filter=hoomd.filter.All(), tau=1.0)
integrator.methods.append(nvt)
Assign the integrator to the simulation:
[11]:
sim.operations.integrator = integrator
## Setting random velocities
In HOOMD-blue, velocities default to 0:
[12]:
snapshot = sim.state.get_snapshot()
snapshot.particles.velocity[0:5]
[12]:
array([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
When using the NVT integration method, you must specify non-zero initial velocities. NVT modifies particle velocities by a scale factor so it cannot scale a zero velocity to a non-zero one. The thermalize_particle_momenta method will assign Gaussian distributed velocities consistent with the canonical ensemble. It also sets the velocity of the center of mass to 0:
[13]:
sim.state.thermalize_particle_momenta(filter=hoomd.filter.All(), kT=1.5)
You can inspect the snapshot to see the changes that thermalize_particle_momenta produced. Use the ThermodynamicQuantities class to compute properties of the system:
[14]:
thermodynamic_properties = hoomd.md.compute.ThermodynamicQuantities(
filter=hoomd.filter.All())
ThermodynamicQuantities is a Compute, an Operation that computes properties of the system state. Some computations can only be performed during or after a simulation run has started. Add the compute to the operations list and call run(0) to make all properties available without changing the system state:
[15]:
sim.operations.computes.append(thermodynamic_properties)
sim.run(0)
There are $$(3 N_{\mathrm{particles}} - 3)$$ degrees of freedom in the system. The NVT integration method conserves linear momentum, so the - 3 accounts for the effectively pinned center of mass.
[16]:
thermodynamic_properties.degrees_of_freedom
[16]:
765.0
Following the equipartition theorem, the average kinetic energy of the system should be approximately $$\frac{1}{2}kTN_{\mathrm{dof}}$$.
[17]:
1 / 2 * 1.5 * thermodynamic_properties.degrees_of_freedom
[17]:
573.75
[18]:
thermodynamic_properties.kinetic_energy
[18]:
566.7599495184154
Why isn’t this exactly equal? Doesn’t this kinetic energy correspond to a different temperature than was set?
[19]:
thermodynamic_properties.kinetic_temperature
[19]:
1.4817253582180794
No, it doesn’t. The instantaneous kinetic temperature $$T_k$$ ($$kT_k$$ in energy units here) of a finite number of particles fluctuates! The canonical ensemble holds the number of particles, volume, and the thermodynamic temperature constant. Other thermodynamic quantities like kinetic energy (and thus kinetic temperature) will fluctuate about some average. Both that average and the scale of the fluctuations are well defined by statistical mechanics.
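You can verify the reported value directly from equipartition, $$kT_k = 2K/N_{\mathrm{dof}}$$, using the quantities computed above (a quick side check, not part of the original tutorial):
print(2 * thermodynamic_properties.kinetic_energy
      / thermodynamic_properties.degrees_of_freedom)
# ~1.4817, matching thermodynamic_properties.kinetic_temperature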
## Run the simulation
Run the simulation forward in time to randomize the particle positions. As the simulation progresses, it will move from the initial highly ordered state to a random fluid that fills the box.
[20]:
sim.run(10000)
Here is the final random configuration of particles:
[21]:
render(sim.state.get_snapshot())
[21]:
[22]:
thermodynamic_properties.kinetic_energy
[22]:
565.9683492727562
Here, you can see that the instantaneous kinetic energy of the system has taken on a different value. Depending on the random number seed and other conditions when this notebook runs, this value may be smaller or larger than the expected average value, but it should remain relatively close to the expected average.
Now, you’ve created an initial random configuration of Lennard-Jones particles in a low density fluid. Save the final configuration to a GSD file for use in the next stage of the simulation:
[23]:
hoomd.write.GSD.write(state=sim.state, filename='random.gsd', mode='xb')
The next section of this tutorial will compress this initial condition to a higher density where it will assemble an ordered structure. |
How to create the following table?
I was unable to create the following table
Here is my MWE:
\documentclass{article}
\usepackage[english]{babel}
%\usepackage{multirow}
\usepackage{amsmath}
\begin{document}
\begin{table}[htb!]
\centering
\begin{tabular}{c |c|c|c|c|}
\cline{2-3}
& \multicolumn{2}{ c| }{Figure Here} \\ \cline{2-3}
& First entry & Second entry \\ \cline{1-3}
\multicolumn{1}{ |c| }{First door} & \begin{aligned}[t]\frac{\beta}{\alpha}\end{aligned} & \begin{aligned}[t]\frac{\beta}{\alpha}\end{aligned} \\ \cline{1-3}
\multicolumn{1}{ |c| }{Second door} & \begin{aligned}[t]\frac{\beta}{\alpha}\end{aligned} & \begin{aligned}[t]\frac{\beta}{\alpha}\end{aligned} \\ \cline{1-3}
\end{tabular}
\caption{Ways}
\end{table}
\end{document}
• Also, you use the aligned environment, but don't have whatever to align, and the texts of the code doesn't correspond to the image… – Bernard Nov 26 '17 at 11:51
• @Bernard: I used the aligned environment because I repeatedly get an error when I write $$\frac{\alpha}{\beta}$$ inside the table. – Maryà Nov 26 '17 at 11:56
• @marya - You most definitely shouldn't use $$...$$ inside a tabular environment. Write $\displaystyle ...$ instead. – Mico Nov 26 '17 at 11:58
I'd like to recommend that you give the table a much more open look, mainly by (a) omitting all vertical lines and (b) using fewer, but well-spaced, horizontal lines. Your readers will certainly appreciate it -- and they will likely show their appreciation by actually taking the time to read and understand the table's contents.
\documentclass{article}
\usepackage{amsmath,booktabs}
\begin{document}
\begin{table}[htb!]
\centering
\begin{tabular}{@{}lcc@{}}
\cmidrule[\heavyrulewidth](l){2-3}
& \multicolumn{2}{ c@{}}{Figure Triangle Here} \\
\cmidrule(l){2-3}
& First entry & Second entry \\
\midrule
First door & $\dfrac{\beta\mathstrut}{\alpha}$ & $\dfrac{\beta}{\alpha}$ \\
Second door & $\dfrac{\beta}{\alpha\mathstrut}$ & $\dfrac{\beta}{\alpha}$ \\
\bottomrule
\end{tabular}
\caption{Ways}
\end{table}
\end{document}
Addendum to address the OP's follow-up question: Your objective may be achieved by (a) loading the caption package and (b) replacing
\multicolumn{2}{ c@{}}{Figure Triangle Here}
with
\multicolumn{2}{c@{}}{
\begin{minipage}{3.5cm}
\centering
\includegraphics[width=0.5\textwidth]{triangle.jpg}
\captionof*{figure}{Figure: Triangle}
\end{minipage}}
• I am unable to make a caption of the figure. I added \multicolumn{2}{c @{}}{\includegraphics[]{triangle.jpg}} along with graphicx package. – Maryà Nov 26 '17 at 18:14
• @marya - I'm not sure I understand. First off, there's a table environment, not a figure environment. Second, the code already a \caption statement; did you try to add another one? Please clarify. Regarding the triangle: Did you replace the directive \multicolumn{2}{c@{}}{Figure Triangle Here} with \multicolumn{2}{c@{}}{\includegraphics[width=<some width>]{triangle.jpg}}? – Mico Nov 26 '17 at 18:21
• Yes, I did but I want to add a caption for the figure like in the diagram shown in the question. – Maryà Nov 27 '17 at 5:11
This code works fine without aligned. I suggest using the cellspace package to give cells some padding (otherwise, the fractions touch the horizontal lines), and the medium-sized fractions from nccmath – in my opinion text fractions look petty in this context. Last remark: since version 3.9 of babel, it is recommended that the language option be loaded with the document class, so that language-dependent packages are aware of the main language in the document.
\documentclass[english]{article}
\usepackage{babel}
\usepackage{array, cellspace}
\setlength{\cellspacetoplimit}{4pt}
\setlength{\cellspacebottomlimit}{4pt}
\usepackage{amsmath, nccmath}
\begin{document}
\begin{table}[htb!]
\centering
\begin{tabular}{*{4}{Sc|}}
\cline{2-3}
& \multicolumn{2}{Sc| }{Figure Here} \\ %
\cline{2-3}
& First entry & Second entry \\ %
\cline{1-3}
\multicolumn{1}{ |c| }{First door} & $\mfrac{\beta}{\alpha}$ & $\mfrac{\beta}{\alpha}$ \\ \cline{1-3}
\multicolumn{1}{ |c| }{Second door} & $\frac{\beta}{\alpha}$ & $\frac{\beta}{\alpha}$ \\ %
\cline{1-3}
\end{tabular}
\caption{Ways}
\end{table}
\end{document} |
# [texhax] Vista
Michael Barr barr at math.mcgill.ca
Mon Sep 3 18:47:07 CEST 2007
Has anyone gotten tex (and, especially mf) to run correctly under Vista?
As brief as I can make it, here is my experience. I accepted a full
installation from the TeXLive 2007 CD (or DVD, I didn't pay attention).
It seemed to install properly and I got tex working with no problem (I had
to install xypic and my own diagxy). When I tried to run the viewer, I
got mysterious error messages I wrote about previously. I tried to use a
suggestion from Reinhard Kotucha to use pdftex. Instead I tried dvipdfm,
but I don't think there is any essential difference. Here is the error
message I got:
[c:\math\tac]dvipdfm isbell1
isbell1.dvi -> isbell1.pdf
[1kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 xyatip10
mktexmf: empty or non-existent rootfile!
mktexpk: don't know how to create bitmap font for xyatip10.
kpathsea: Appending font creation commands to missfont.log.
xyatip10: Can't locate a Type 1, TTF, PK, or virtual font file
Not sure how to proceed. For now this is fatal
Maybe in the future, I'll substitute some other font.
Output file removed.
I gave up and copied the entire miktex directory from my old computer
(there were nearly 600 MB and it took 22 minutes) and tried running it.
When I ran yap, I got a mysterious message about a missing font. Well, the
font is there and it is where mf put it running under yap on my old
computer. The display seemed to be correct except for missing characters
from the Zapf chancery fonts, which are in the location
and yap was unable to make them either. There is an error message from
yap located somewhere, but well hidden unless you know exactly where to
look.
Incidentally, AFAIK, TeX works fine; it is only the fonts that are a
problem. Since I had copied the directory in toto, it didn't seem
necessary to run mktexlsr, but I did anyway after the first failure. My
old computer has no relevant environment variables (except the path) set
so that cannot be the problem.
My best guess is that some permissions are not allowed, although that does
not explain yap's failure to find just one font.
Michael Barr |
[Japanese product listing, mojibake in the original: a “STORY” brand ankle-pants product page, item no. 612H3109, ¥10,780 (tax included), sizes 5–17, with colour availability and a size chart; the remaining details are not reliably recoverable from the garbled text.]
# Lesson 11
Slicing Solids
Let's see what shapes you get when you slice a three-dimensional object.
### Problem 1
A cube is cut into two pieces by a single slice that passes through points $$A$$, $$B$$, and $$C$$. What shape is the cross section?
### Problem 2
Describe how to slice the three-dimensional figure to result in each cross section.
Three-dimensional figure:
Cross sections:
### Problem 3
Here are two three-dimensional figures.
Describe a way to slice one of the figures so that the cross section is a rectangle.
### Problem 4
Each row contains the degree measures of two supplementary angles. Complete the table.
| measure of an angle | measure of its supplement |
|---|---|
| $$80^\circ$$ | |
| $$25^\circ$$ | |
| $$119^\circ$$ | |
| $$x$$ | |
(From Unit 1, Lesson 12.) |
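For reference, the missing entries follow from the fact that supplementary angles sum to $$180^\circ$$:
$$180^\circ-80^\circ=100^\circ,\qquad 180^\circ-25^\circ=155^\circ,\qquad 180^\circ-119^\circ=61^\circ,\qquad 180^\circ-x$$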
# Glacial lakes exacerbate Himalayan glacier mass loss
## Abstract
Heterogeneous glacier mass loss has occurred across High Mountain Asia on a multi-decadal timescale. Contrasting climatic settings influence glacier behaviour at the regional scale, but high intra-regional variability in mass loss rates points to factors capable of amplifying glacier recession in addition to climatic change along the Himalaya. Here we examine the influence of surface debris cover and glacial lakes on glacier mass loss across the Himalaya since the 1970s. We find no substantial difference in the mass loss of debris-covered and clean-ice glaciers over our study period, but substantially more negative (−0.13 to −0.29 m w.e.a−1) mass balances for lake-terminating glaciers, in comparison to land-terminating glaciers, with the largest differences occurring after 2000. Despite representing a minor portion of the total glacier population (~10%), the recession of lake-terminating glaciers accounted for up to 32% of mass loss in different sub-regions. The continued expansion of established glacial lakes, and the preconditioning of land-terminating glaciers for new lake development increases the likelihood of enhanced ice mass loss from the region in coming decades; a scenario not currently considered in regional ice mass loss projections.
## Introduction
Glacier mass loss has occurred across large parts of High Mountain Asia over at least the last four decades1,2,3,4, although substantial spatial variability has been documented in the magnitude of glacier mass loss in the region. Glaciers in the Karakoram, Kunlun Shan and eastern Pamir have maintained mass balance to the present day3,5,6,7, whereas glaciers located in the Himalaya, in the Tien Shan and Nyainqentanghla have experienced substantial mass loss in recent decades2,6. The disparity in regional mass loss rates has been attributed to the diminished sensitivity to warming of glaciers in the Karakoram, Kunlun Shan and eastern Pamir due to their accumulation of snowfall in winter months, rather than during the summer monsoon along the Himalaya8. However, large intra-regional variability in glacier mass loss is evident along the Himalayan arc6,9, which suggests factors exist that are capable of exacerbating glacier recession in addition to climatic change here.
Glaciers situated in the Himalaya commonly have extensive debris cover10, and an increasing number terminate into a glacial lake11. A continuous debris mantle thicker than a few centimetres dampens sub-debris ablation rates12. Modelling studies have shown how debris cover enables the persistence of greater glacier area in comparison with clean-ice in a changing climate13,14. However, comparable thinning rates have been observed for clean-ice and debris-covered glaciers at similar elevations15,16,17 at several locations in the Himalaya.
Glacial lakes amplify ice loss from their host glaciers through mechanical calving and subaqueous melt18,19. There are currently more than 700 proglacial lakes in the Himalaya11,20, which are all capable of directly influencing the behaviour of their host glacier. Proglacial lake area expanded by >50% in the Himalaya between 1990 and 201511. Enhanced glacier area reductions have been observed from lake-terminating glaciers in the Sikkim Himalaya21, and elevated glacier mass loss from lake-terminating glaciers has recently been confirmed as a region-wide phenomenon9,22,23. However, a comprehensive analysis of the impact of glacial lakes on glacier retreat and mass loss rates in the Himalaya is still lacking.
The main aim of this study is therefore to examine the influence of a debris mantle and glacial lake development on the long-term evolution of Himalayan glaciers in detail, in order to improve our understanding of the regional variability of ice loss rates. We quantify mass loss and terminus retreat from lake and land-terminating glaciers along the Himalayan arc since the 1970s, using optical and radar based remotely-sensed datasets. We use these data to discuss the role of debris cover and glacial lakes as drivers of glacier mass loss in the Himalaya and consider the future evolution of glaciers in the region.
## Glacier Mass Balance and Ice Front Retreat Rates
We generated geodetic glacier mass balance estimates for two periods using digital elevation models (DEM) derived from Hexagon KH-9 stereoscopic imagery (spanning the period 1973–1976, supplementary information), the Shuttle Radar Topographic Mission DEM (2000), and 499 DEMs generated from WorldView and Geoeye optical stereo pairs (spanning the period 2012–2016)24. Our assessment of contemporary (hereafter 2000–~2015) glacier mass loss rates covers a continuous swath from Jammu and Kashmir in the West of the Himalaya, to the Arunachal Pradesh in the Far East of the Himalaya (Fig. 1) and encompasses 1275 glaciers greater than 1 km2 in area (7450 km2 in total). Our assessment of 1973–6 to 2000 (hereafter ~1974–2000) glacier mass loss focusses on six regions (Fig. 1) and includes mass loss estimates for 939 glaciers (4834 km2 glacier area). We paired glacier mass balance data with estimates of glacier ice front retreat for a subset of 325 glaciers located in the same areas covered by Hexagon data. Ice front retreat was measured between the date of the Hexagon (1973–6), Landsat (1999–2002) and Sentinel imagery (2017/18).
## Results: Temporal Variability in Glacier Mass Loss
Pervasive increases in ice mass loss and divergent ice mass loss depending on glacier terminus type are both evident in our results (Fig. 2). The mean mass balance of all glaciers within our sample over the period ~1974–2000 was −0.25 ± 0.09 m water equivalent (w.e.) a−1, ranging from −0.20 ± 0.08 to −0.29 ± 0.10 m w.e.a−1. The mean mass balance of all glaciers between 2000 and ~2015 was −0.39 ± 0.12 m w.e.a−1, ranging from −0.26 ± 0.11 to −0.54 ± 0.20 m w.e.a−1 (Table 1). Glacier mass loss rates increased without exception in our study regions (Table 1, Fig. 1). Our results are broadly in line with those of Maurer et al.9, although our data do not support their finding that contemporary ice loss rates have doubled relative to the ~1974–2000 period, particularly considering the levels of uncertainty associated with the mass balance data: our estimates change from −0.25 ± 0.09 to −0.39 ± 0.12 m w.e.a−1, whereas they suggest a change from −0.22 ± 0.13 to −0.43 ± 0.14 m w.e.a−1.
### The role of debris cover in glacier evolution
To examine the relative importance of the presence of a debris mantle on glacier mass loss rates, we subdivided our mass balance datasets depending on debris extent, following the approach of22 (methods). Akin to22, we find no significant difference between thinning rates (Fig. 3) or mass balance of land-terminating glaciers with and without substantial debris cover. Over the period 2000–~2015 clean-ice, land-terminating glacier mass balance was −0.35 ± 0.12 m w.e.a−1, whereas debris-covered, land-terminating glacier mass balance was slightly more negative at −0.41 ± 0.12 m w.e.a−1. Further to22, we find similar mass loss rates irrespective of debris cover extent over the period ~1974–2000. Clean-ice, land-terminating glacier mass balance was −0.22 ± 0.08 m w.e.a−1, whereas debris-covered, land-terminating glacier mass balance was again slightly more negative at −0.29 ± 0.08 m w.e.a−1. These results show that similar ice loss from debris-covered compared to debris-free glaciers is not a recent phenomenon. Using unpaired, two-tailed t-tests, we examined the statistical characteristics of the differences between mass balance estimates for debris-covered and clean-ice glaciers (supplementary tables 7 and 8). In five of our eight sub-regions, we find little evidence of significant differences in the mass balance of debris-covered and clean-ice glaciers (p > 0.05, t 0.05–1.35) over the period 2000–~2015 (supplementary table 8). The coverage of our ~1974–2000 mass balance dataset did not allow for the statistical analyses of differences in all sub-regions, but in four cases we find little evidence of significant differences in the mass balance of debris-covered and clean-ice glaciers (p > 0.05, t 1.07–1.91) over this study period.
### Terminus type variability in ice loss
The mean mass balance of lake-terminating glaciers was substantially more negative than that of land-terminating glaciers (Table 1), thus we focus the remainder of our analyses on the impact of glacier-lake interactions on glacier mass loss. Over the period ~1974–2000, lake-terminating glacier mass balance (mean −0.32 ± 0.12 m w.e.a−1) was more negative than land-terminating glacier mass balance (−0.23 ± 0.09 m w.e.a−1) across the Himalaya (Fig. 2), with the difference ranging from 0.03 m w.e.a−1 (Central West 1) to 0.13 m w.e.a−1 (East) for specific regions (Table 1).
Again we examined the statistical characteristics of terminus-type dependant differences in our mass balance datasets (supplementary tables 7 and 8). In two out of three sub-regions tested for the period ~1974–2000 p < 0.05, although t-values were low (2.05–2.51), suggesting a less robust relationship between terminus type and mass loss rates over this earlier period. In the five sub-regions where data quantity allowed for statistical analyses, terminus type dependant differences in mass balance were all significant (p < 0.05, t 2.65–5.88) over the period 2000–~2015, which suggests the much greater impact of glacial lake growth on glacier mass loss rates towards the present day over large parts of the Himalaya.
Glacier terminus retreat accompanied the widespread glacier thinning across the Himalaya (Figure 3). Over the period ~1974–2000, land-terminating glaciers retreated at a mean rate of 7.1 ± 1.1 m a−1, ranging only slightly between regions (Supplementary Table 6). Lake-terminating glaciers retreated at a mean rate of 15.9 ± 1.1 m a−1 over the same period. Glacier terminus retreat rates increased without exception across the two time periods, to a mean rate of 10.4 ± 1.4 m a−1 for land-terminating glaciers and 26.8 ± 1.4 m a−1 for lake-terminating glaciers, respectively, over the period 2000–2018 (Supplementary Table 5). The retreat rate of land-terminating glaciers increased on average by ~46% between the two study periods, whereas lake-terminating glacier retreat rates increased by almost 70%. Along glacier centrelines (see methods), land-terminating glaciers reduced in length by a mean value of 9%, ranging from no change (where heavily debris-covered) to 33%, between the 1970s and 2018. Lake-terminating glacier length reduced by a mean of 13%, ranging from <1 to 49%, over the same period.
Examination of the altitudinal distribution of glacier surface elevation changes shows ice loss at the glacier-lake interface to be the main driver of the enhanced mass loss from lake-terminating glaciers (Fig. 3). Thinning rates of ~1 m a−1 were pervasive for ablation zones of land-terminating glaciers across the Himalaya (Fig. 3) over the period 2000–~2015. In contrast, lake-terminating glaciers thinned by up to 4 m a−1 at their termini in some regions (Eastern Himalaya), and large portions of their ablation zones thinned at a greater rate than land-terminating glaciers. Similar thinning patterns are evident for glaciers of different terminus type over the period ~1974–2000 (Fig. 3), although thinning rates were of lesser magnitude. Land-terminating glacier ablation zones thinned at a rate of ~0.5 m a−1 over the period ~1974–2000, whereas lake-terminating glacier ablation zones lowered at a mean rate of ~1 m a−1 over the same period.
Lake-terminating glaciers constituted only a small portion of the glacier population in each region, yet they were responsible for a substantial amount of the regional ice mass loss, across both study periods (Table 2). Lake-terminating glaciers accounted for ~32% of the ice mass loss in our Central West 1 study area (Fig. 1) over the period ~1974–2000, despite just ~9% of the glacier population terminating into a lake. Lake-terminating glaciers in the Central 1, the Central East and East Himalaya contributed ~20% of the total regional ice mass loss whilst accounting for 11–14% of the glacier population over the period ~1974–2000. The contribution of lake-terminating glaciers to intra-regional ice mass loss budgets increased by ~21% after 2000, where glacial lakes are prevalent. Lake-terminating glaciers in Central West 1, Central East and East Himalaya provided similar proportions (30, 30 and 29%, respectively) of the total regional mass loss over this period (Table 2). The regional mass balance in the Central West 2 region, where only a few lake-terminating glaciers are situated, remained almost unchanged (−0.24 ± 0.11 vs −0.26 ± 0.11 m w.e. a−1) between the two study periods. Maurer et al.9 estimated that only 5–6% of the total ice mass loss from the entire Himalaya is provided by lake-terminating glaciers, although their analysis is limited to glaciers >3 km2 in size, and they show that smaller lake-terminating glaciers generally display the most negative mass balance.
We measured comparable mass loss rates from glaciers in the West Himalaya, where few glacial lakes are situated, to regions where glacial lakes have exacerbated ice mass loss (Table 1). Glaciers in Garwhal Himalaya exist in a unique climatological setting. They receive the majority of their precipitation from mid-latitude winter westerlies1,25, but experience mean annual temperatures more akin to the central Himalaya, rather than the colder Karakoram8. The sensitivity of snowfall to warming is therefore higher in this region, and long-term temperature increases8,26 have heavily impacted both seasonal snowfall26,27 and the phase of summer precipitation17,28, and therefore glacier mass balance in this region.
## Discussion: Implications for Future Glacier Evolution
Our results clearly emphasise the strong impact of glacial lake development on glacier recession along the Himalaya since the mid-1970s, alongside atmospheric warming9. Over this period, lake-terminating glacier mass balance was substantially more negative than that of land-terminating glaciers, and lake-terminating glacier termini retreated at twice the rate of their land-terminating counterparts. Although lake-terminating glaciers make up only a small portion of the total glacier population (~10%), they are responsible for a disproportionate share of intra-regional ice mass loss. Where lake-terminating glaciers are most prevalent (Central West 1, Central 1, Central East and East Himalaya), lake-terminating glacier recession accounted for almost 30% of the total ice mass loss, despite comprising only ~15% of the glacier population, over the period 2000–~2015. This contribution increased from ~23% over the period ~1974–2000, when ~11% of the glacier population terminated into glacial lakes. Statistical analyses of our mass balance datasets also indicate the now widespread influence of glacier terminus type on glacier mass loss rates. Where glacial lakes were not prevalent (Central West 2), regional mass loss rates have remained steady over the last four decades.
The magnitude of the contribution of lake-terminating glaciers to regional ice loss is unlikely to diminish in coming decades, given the sustained expansion of current proglacial lakes across the Himalaya11,20,29, and the preconditioning of many debris-covered, land-terminating glacier surfaces for meltwater storage. Ref. 30 suggests that the transition of many debris-covered glaciers from land-terminating to lake-terminating is a likely scenario in the later stages of glacier wastage. Indeed, more than 25% of the debris-covered glaciers we examined hosted glacial lakes, and debris-covered, lake-terminating glaciers displayed the highest mass loss rates of all glaciers we surveyed (−0.67 ± 0.15 m w.e.a−1, supplementary table 4). Widespread glacier surface velocity reductions31, sustained glacier thinning (Fig. 3) and associated surface slope reductions32 will allow for the formation of more extensive supraglacial pond networks on many debris-covered glaciers, which will eventually coalesce to become pro-glacial lakes31. The heightened mass loss from such glaciers will sustain their contribution to the regional mass loss budget in coming decades.
Our results show that several decades of enhanced ice loss is possible whilst glacier-lake interactions drive the dynamic evolution of such glaciers. Increased thinning rates and amplified terminus retreat rates (Fig. 3) were documented for the majority of the population of lake-terminating glaciers we assessed over the >40 year study period. The amplified thinning towards lake-terminating termini is due to the occurrence of both mechanical calving and subaqueous melt18,19. The increase in thinning rates over lake-terminating glaciers across the two study periods (Fig. 3) is likely to have been driven by the increased areal extent11,29 and the depth33 of glacial lakes across the region in recent decades. Increased proglacial lake depth exacerbates calving fluxes18,34 and increases the glacier-lake contact area prone to subaqueous melt and can also influence glacier flow rates32, which increases ice fluxes towards the lake each glacier hosts. The dynamic behaviour of lake-terminating glaciers is in stark contrast to land-terminating glaciers along the Himalaya, which have experienced substantial velocity reductions in response to thinning and driving stress reductions since 200031.
The comparability of ice loss rates from debris-covered and clean-ice glaciers suggests that localised ablative processes, such as ice cliff and supraglacial pond expansion35,36,37, have contributed substantially to individual glacier mass budgets for much longer than previously thought, even during times of less negative glacier mass balance. Estimates of the contribution of ice-cliff backwasting to individual glacier ablation budgets in the Himalaya range from 7–40%36,37,38, and the absorption and redistribution of energy by supraglacial ponds has been suggested to account for 6–19% of surface ablation on debris-covered glaciers in the Langtang catchment. In combination, these processes may drive substantial ablation in heavily debris-mantled areas of glaciers. Pervasive glacier stagnation31 may also be contributing to the comparability of debris-covered and clean-ice glacier thinning rates, with reduced emergence velocities in debris-covered areas38 aiding thinning. Disentangling the contribution of each ablative process is key to understanding the evolution of debris-covered, land-terminating glaciers in the Himalaya.
In order to understand whether the contribution of lake-terminating glaciers to regional ice mass loss may increase further, both the prevalence of the formation of new glacial lakes, and the impact of multi-decadal glacier thinning on the dynamics of lake-terminating glaciers need to be better understood. If lake-terminating glacier behaviour is not considered in future ice mass loss scenarios, ice mass loss from the Himalaya, and other regions where glacial lakes are common, may be substantially underestimated.
## Methods
### DEM pre-processing and dh/dt correction
The methods of39 were followed to eliminate planimetric and altimetric shifts from HMA DEMs and Hexagon KH-9 DEMs. The non void-filled, 30 m resolution SRTM DEM (https://earthexplorer.usgs.gov/) was used as the reference DEM and the RGI V6.0 glacier inventory40, which was modified manually to reflect glacier extent visible in the Hexagon imagery from the 1970s, was used to isolate dh/dt data over stable ground from which shift vectors were calculated. Along-track and cross-track biases were not prevalent in HMA DEMs. To remove tilts from Hexagon KH-9 DEMs, a second order global trend surface was fitted to non-glacierised terrain, considering elevation differences between ±150 m and inclination ≤15°28. Following the coregistration of DEMs from different epochs, individual DEMs were differenced to obtain elevation change data over different time periods.
The SRTM DEM is known to have underestimated glacier surface elevations due to C-band radar penetration41. Failure to correct such a penetration bias may cause a 20% underestimate in regional mass balance estimates42. We corrected dh/dt data derived using the SRTM DEM using the penetration estimates of ref. 15, which were estimated through the reconstruction of glacier surface elevations at the point of SRTM acquisition via the extrapolation of a time series of IceSat data (spanning the period 2003–2009), with the difference between the two datasets assumed to represent C-band penetration depths. The direct validation of SRTM penetration depth estimates is difficult due to the lack of information available about spatially variable glacier surface conditions (snowpack depth and extent) at the time of SRTM DEM acquisition. We compared our geodetic mass balance estimates with those derived using alternative methods and baseline datasets not affected by C-band radar penetration (Supplementary Table 3), and find a mean difference of −0.02 m w.e. a−1 (ranging from −0.12 to +0.08 m w.e. a−1) between our estimates of regional mass loss and those generated by ref. 6 over directly comparable time periods. This suggests the successful elimination of C-band radar penetration biases.
The derivation of geodetic mass balance estimates involves the summation of glacier mass loss or gain over the entirety of a glacier’s surface. Variable glacier surface conditions and the extreme topography of glacierised mountain regions means data gaps and anomalous surface elevation values are common in DEMs generated from remotely-sensed imagery. Data gaps and anomalies are inherited by glacier surface elevation change data once DEMs from two time periods are differenced, and they must be filled or removed through filtering for glacier mass loss to be captured accurately.
The approach of43 was employed to filter the surface elevation change data generated using Hexagon KH-9 data. This approach involves the filtering of surface elevation change data depending on the standard deviation of elevation changes, weighted by an elevation dependent coefficient. The approach of43 allows for stricter filtering of elevation change data at higher elevations, where outliers arising from poor image contrast in glacier accumulation zones are common and where the magnitude of elevation changes are expected to be lower. More lenient filtering of elevation change data is required over glacier ablation zones, where optical contrast and therefore Hexagon DEM quality was higher.
The improved spatial and spectral resolution of the WorldView and Geoeye imagery in comparison to the Hexagon data means superior coverage of DEMs was available over glacier accumulation zones in our later study period. The remnant anomalies present in our contemporary (SRTM-HMA) surface elevation change dataset, mainly resulting from errors in the SRTM DEM, were eliminated following the simpler approach of44. The approach of44 involves the removal of values greater than +/− 3 standard deviations of the mean elevation change in 100 m altitudinal bins through the elevation range of glacierised terrain.
We employed a two-step gap filling approach; first we used a 4 × 4 cell moving window to fill small (a few pixels) data gaps with mean elevation change data from neighbouring cells. We then filled larger data gaps with median values of surface elevation change calculated across each 100 m increment of the glaciers elevation range. Both approaches have been shown to have limited impact on glacier mass loss estimation45. Data gaps were most prevalent in surface elevation change data derived from Hexagon imagery, varying from 5.5–14.5% of glacier area for different sub-regions. We converted surface elevation change data to ice volume considering the grid size of our dh/dt data (30 m pixels), and then to glacier mass change using a conversion factor of 850 ± 60 kg m−3 46.
### Glacier mass balance subdivision
We divided our samples of glacier mass balance depending on their terminus type and debris extent. Terminus type was determined manually using satellite imagery from each date as reference, with contact required between a proglacial lake and its host glacier to allow for its classification of lake-terminating. We replicated the approach of22 to divide our mass balance data depending on debris-extent, and classified glaciers as debris-covered where more than 19% of their area was mantled by debris, and as clean-ice otherwise, using the supraglacial classification of10.
### Mass balance uncertainty
Our mass balance uncertainty (σΔm) estimates consider and combine the uncertainty associated with surface elevation change (EΔh), the uncertainty associated with volume to mass conversion (EΔm), and the spatially nonuniform distribution of uncertainty.
The uncertainty associated with elevation change (EΔh) was calculated through the derivation of the standard error - the standard deviation of the mean elevation change - of 100 m altitudinal bands of elevation difference data35,45:
$${E}_{\varDelta h}=\frac{{\sigma }_{stable}}{\sqrt{{\rm{N}}}}$$
Where σstable is the standard deviation of the mean elevation change of stable, off-glacier terrain, and N is the effective number of observations46. N is calculated through:
$$N=\frac{{N}_{tot}\cdot PS\,}{2d}$$
Where Ntot is the total number of DEM difference data points, PS is the pixel size and d is the distance of spatial autocorrelation, taken here to equal 20 pixels (600 m). EΔm was calculated as 7% of the mass loss estimate47 for each glacier and summed quadratically with EΔh:
$$\sigma_{\Delta m}=\sqrt{E_{\Delta h}^{2}+E_{\Delta m}^{2}}$$
σΔm was then weighted depending on glacier hypsometry in each region to better represent the spatial variability of uncertainty35.
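A minimal sketch of these two steps in Python (illustrative only; the function and variable names are ours, and the default autocorrelation distance of 20 pixels × 30 m = 600 m follows the text):
import math

def mass_balance_uncertainty(sigma_stable, n_tot, pixel_size, e_dm, d=600.0):
    # effective number of observations, N = N_tot * PS / (2 d)
    n_eff = n_tot * pixel_size / (2.0 * d)
    # standard error of the mean elevation change over stable terrain
    e_dh = sigma_stable / math.sqrt(n_eff)
    # quadratic sum with the volume-to-mass conversion uncertainty (7% of mass loss)
    return math.sqrt(e_dh**2 + e_dm**2)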
### Glacier terminus mapping
Glacier termini were mapped for three different epochs using the same six Hexagon KH-9 scenes used in DEM generation, 7 Landsat TM/ETM+ scenes spanning the period 1999–2002, and 6 Sentinel 2 A/B scenes spanning the period 2016 to 2018 (Supplementary Table 2). We also used 8 orthorectified Corona KH-4B images analysed by48 to map glacier termini in Himachal Pradesh (West Himalaya). Glacier termini were mapped in a semi-automated fashion using the approach of49, which involves the manual digitisation of glacier termini, the division of the ice front into points of even spacing, and the measurement of the distance between terminus points to a reference location placed up glacier. We generated glacier centreline profiles for the extent of glaciers in the Hexagon imagery following the approach of50 to quantify the impact of terminus retreat on glacier length over the study period.
### Glacier terminus change uncertainty
We followed the approach of51 to estimate the uncertainty associated with terminus retreat rates, whereby:
$$e=\sqrt{{({\rm{PS}}1)}^{2}+{({\rm{PS}}2)}^{2}}+{{\rm{E}}}_{{\rm{reg}}}$$
Where e is the total error in terminus position, PS1 is the pixel size of imagery from the first epoch, PS2 is the pixel size of imagery from the second epoch, and Ereg the coregistration error between images, which we assume to be half a pixel52.
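Continuing the sketch above, the terminus-position uncertainty follows the same pattern, with the co-registration error taken as half a pixel (of the coarser image, an assumption made here for illustration):
def terminus_position_uncertainty(ps1, ps2):
    e_reg = 0.5 * max(ps1, ps2)          # half a pixel, per the text
    return math.sqrt(ps1**2 + ps2**2) + e_reg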
## References
1. Bolch, T. et al. The state and fate of Himalayan glaciers. Science 336, 310–314 (2012).
2. Farinotti, D. et al. Substantial glacier mass loss in the Tien Shan over the past 50 years. Nat. Geosci. 8, 716–722 (2015).
3. Bolch, T. et al. Status and Change of the Cryosphere in the Extended Hindu Kush Himalaya Region. In Philippus Wester, Arabinda Mishra, Aditi Mukherji & Arun Bhakta Shrestha (Eds.): The Hindu Kush Himalaya Assessment: Mountains, Climate Change, Sustainability and People. Springer International Publishing, pp. 209–255 (2019).
4. Zhou, Y., Li, Z., Li, J., Zhao, R. & Ding, X. Glacier mass balance in the Qinghai-Tibet Plateau and its surroundings from the mid 1970s to 2000 based on Hexagon KH-9 and SRTM DEMs. Rem. Sens. Environ. 210, 96–112 (2018).
5. Kääb, A., Treichler, D., Nuth, C. & Berthier, E. Contending estimates of 2003–2008 glacier mass balance over the Pamir-Karakoram-Himalaya. Cryosphere 9, 557–564 (2015).
6. Brun, F., Berthier, E., Wagnon, P., Kääb, A. & Treichler, D. A spatially resolved estimate of High Mountain Asia glacier mass balances from 2000 to 2016. Nat. Geosci. 10, 668–673 (2017).
7. Bolch, T., Pieczonka, T., Mukherjee, K. & Shea, J. Glaciers in the Hunza catchment (Karakoram) have been nearly in balance since the 1970s. Cryosphere 11, 531–539 (2017).
8. Kapnick, S. B., Delworth, T. L., Ashfaq, M., Malyshev, S. & Milly, P. C. D. Snowfall less sensitive to warming in the Karakoram than in Himalayas due to a unique seasonal cycle. Nat. Geosci. 7, 834–840 (2014).
9. Maurer, J., Schaefer, J. M., Rupper, S. & Corley, A. Acceleration of ice loss across the Himalayas over the past 40 years. Sci. Adv. 5 (2019).
10. Kraaijenbrink, P. D. A., Bierkens, M. F. P., Lutz, A. F. & Immerzeel, W. W. Impact of a global temperature rise of 1.5 degrees Celsius on Asia’s glaciers. Nature 549 (2017).
11. Nie, Y. et al. A regional-scale assessment of Himalayan glacial lake changes using satellite observations from 1990–2015. Remote Sens. Environ. 189, 1–13 (2017).
12. Nicholson, L. & Benn, D. I. Calculating ice melt beneath a debris layer using meteorological data. J. Glaciol. 52, 463–470 (2006).
13. Anderson, L. S. & Anderson, R. S. Modeling debris-covered glaciers: response to steady debris deposition. Cryosphere 10, 1105–1124 (2016).
14. Rowan, A. V., Egholm, D. L., Quincey, D. J. & Glasser, N. F. Modelling the feedbacks between mass balance, ice flow and debris transport to predict the response to climate change of debris-covered glaciers in the Himalaya. Earth Planet. Sci. Lett. 430, 427–438 (2015).
15. Kääb, A., Berthier, E., Nuth, C., Gardelle, J. & Arnaud, Y. Contrasting patterns of early twenty-first-century glacier mass change in the Himalayas. Nature 488, 495–498 (2012).
16. Nuimura, T., Fujita, K., Yamaguchi, S. & Sharma, R. R. Elevation changes of glaciers revealed by multitemporal digital elevation models calibrated by GPS survey in the Khumbu region, Nepal Himalaya, 1992–2008. J. Glaciol. 58, 648–656 (2012).
17. Pratap, B., Dobhal, D. P., Mehta, M. & Bhambri, R. Influence of debris cover and altitude on glacier surface melting: a case study on Dokriani Glacier, central Himalaya, India. Ann. Glaciol. 56 (2015).
18. Benn, D. I., Warren, C. R. & Mottram, R. H. Calving processes and the dynamics of calving glaciers. Earth-Sci. Rev. 82, 143–179 (2007).
19. Truffer, M. & Motyka, R. J. Where glaciers meet water: Subaqueous melt and its relevance to glaciers in various settings. Rev. Geophys. 54, 220–239 (2016).
20. Zhang, G. et al. An inventory of glacial lakes in the Third Pole region and their changes in response to global warming. Global and Planetary Change 131, 148–157 (2015).
21. Basnett, S., Kulkarni, A. & Bolch, T. The influence of debris cover and glacial lakes on the recession of glaciers in Sikkim Himalaya, India. J. Glaciol. 59, 1035–1046 (2013).
22. Brun, F. et al. Heterogeneous influence of glacier morphology on the mass balance variability in High Mountain Asia. J. Geophys. Res. Earth Surf. (2019).
23. Song, C. et al. Heterogeneous glacial lake changes and links of lake expansions to the rapid thinning of adjacent glacier termini in the Himalayas. Geomorphology 280, 30–38 (2017).
24. Shean, D. E. et al. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery. J. Photogram. Remote Sens. 116, 101–117 (2016).
25. Vijay, S. & Braun, M. Early 21st century spatially detailed elevation changes of Jammu and Kashmir glaciers (Karakoram-Himalaya). Glob. Plan. Change 165, 137–146 (2018).
26. Shekhar, M. S., Chand, H., Kumar, S., Srinivasan, K. & Ganju, A. Climate-change studies in the western Himalaya. Ann. Glaciol. 51, 105–112 (2010).
27. Yao, T. et al. Different glacier status with atmospheric circulations in Tibetan Plateau and surroundings. Nature Clim. Change 2, 663–667 (2012).
28. Pieczonka, T., Bolch, T., Wei, J. & Liu, S. Heterogeneous mass loss of glaciers in the Aksu-Tarim Catchment (Central Tien Shan) revealed by 1976 KH-9 Hexagon and 2009 SPOT-5 stereo imagery. Remote Sens. Environ. 130, 233–244 (2013).
29. Khadka, N., Zhang, G. & Thakuri, S. Glacial Lakes in the Nepal Himalaya: Inventory and Decadal Dynamics (1977–2017). Remote Sens. 10 (2018).
30. Benn, D. et al. Response of debris-covered glaciers in the Mount Everest region to recent warming, and implications for outburst flood hazards. Earth-Sci. Rev. 114, 156–174 (2012).
31. Dehecq, A. et al. Twenty-first century glacier slowdown driven by mass loss in High Mountain Asia. Nat. Geosci. 12, 22–27 (2019).
32. King, O., Dehecq, A., Quincey, D. J. & Carrivick, J. L. Contrasting geometric and dynamic evolution of lake and land-terminating glaciers in the central Himalaya. Glob. Plan. Change 167, 46–60 (2018).
33. Somos-Valenzuela, M. A., McKinney, D. C., Rounce, D. R. & Byers, A. C. Changes in Imja Tsho in the Mount Everest region of Nepal. Cryosphere 8, 1661–1671 (2014).
34. Kirkbride, M. & Warren, C. R. Calving processes at a grounded ice cliff. Ann. Glaciol. 24, 116–121 (1997).
35. Ragettli, S., Bolch, T. & Pellicciotti, F. Heterogeneous glacier thinning patterns over the last 40 years in Langtang Himal, Nepal. Cryosphere 10, 2075–2097 (2016).
36. Thompson, S., Benn, D. I., Mertes, J. & Luckman, A. Stagnation and mass loss on a Himalayan debris-covered glacier: processes, patterns and rates. J. Glaciol. 62, 467–485 (2016).
37. Miles, E. S. et al. Surface Pond Energy Absorption Across Four Himalayan Glaciers Accounts for 1/8 of Total Catchment Ice Loss. Geophys. Res. Lett. 45, 10464–10473 (2018).
38. Brun, F. et al. Ice cliff contribution to the tongue-wide ablation of Changri Nup Glacier, Nepal, central Himalaya. Cryosphere 12, 3439–3457 (2018).
39. Nuth, C. & Kääb, A. Co-registration and bias corrections of satellite elevation data sets for quantifying glacier thickness change. Cryosphere 5, 271–290 (2011).
40. RGI Consortium. Randolph Glacier Inventory – A Dataset of Global Glacier Outlines: Version 6.0: Technical Report, Global Land Ice Measurements from Space, Colorado, USA. Digital Media (2017).
41. Gardelle, J., Berthier, E. & Arnaud, Y. Impact of resolution and radar penetration on glacier elevation changes computed from DEM differencing. J. Glaciol. 58, 419–422 (2012).
42. Vijay, S. & Braun, M. Elevation change rates of glaciers in the Lahaul-Spiti (Western Himalaya, India) during 2000–2012 and 2012–2013. Rem. Sens. 8 (2016).
43. Pieczonka, T. & Bolch, T. Region-wide glacier mass budgets and area changes for the Central Tien Shan between ~1975 and 1999 using Hexagon KH-9 imagery. Glob. Plan. Change 128, 1–13 (2015).
44. Gardelle, J., Berthier, E., Arnaud, Y. & Kääb, A. Region-wide glacier mass balances over the Pamir-Karakoram-Himalaya during 1999–2011. Cryosphere 7, 1263–1286 (2013).
45. McNabb, R., Nuth, C., Kääb, A. & Girod, L. Sensitivity of glacier volume change estimation to DEM void interpolation. Cryosphere 13, 895–910 (2019).
46. Bolch, T., Pieczonka, T. & Benn, D. I. Multi-decadal mass loss of glaciers in the Everest area (Nepal Himalaya) derived from stereo imagery. Cryosphere 5, 349–358 (2011).
47. Huss, M. Density assumptions for converting geodetic glacier volume change to mass change. Cryosphere 7, 877–887 (2013).
48. Mukherjee, K., Bhattacharya, A., Pieczonka, T., Ghosh, S. & Bolch, T. Glacier mass budget and climatic reanalysis data indicate a climate shift around 2000 in Lahaul-Spiti, western Himalaya. Climatic Change 148(1–2), 219–233 (2018).
49. Bjørk, A. A. et al. An aerial view of 80 years of climate-related glacier fluctuations in southeast Greenland. Nat. Geosci. 5, 427–432 (2012).
50. James, W. H. M. & Carrivick, J. L. Automated modelling of spatially-distributed glacier ice thickness and volume. Comp. & Geosci. 92, 90–103 (2016).
51. Hall, D. K., Bayr, K. J., Schöner, W., Bindschadler, R. A. & Chien, J. Y. Consideration of the errors inherent in mapping historical glacier positions in Austria from the ground and space (1893–2001). Remote Sens. Environ. 86, 566–577 (2003).
52. Bolch, T., Menounos, B. & Wheate, R. Landsat-based glacier inventory of western Canada, 1985–2005. Remote Sens. Environ. 114(1), 127–137 (2010).
## Acknowledgements
This study was supported by the Swiss National Science Foundation (Grant No. IZLCZ2_169979/1), the Dragon 4 project funded by ESA (4000121469/17/I-NB) and the Strategic Priority Research Program of Chinese Academy of Sciences (XDA20100300). We thank Anders Anker Bjørk for providing the glacier length tool used to estimate frontal changes of land-terminating and lake-terminating glaciers. R.B thanks the Director of Wadia Institute of Himalayan Geology for the support of his work.
## Author information
### Contributions
T.B., O.K. and R.B. designed the study. O.K. generated the contemporary mass balance data, analysed all datasets and wrote the draft of the manuscript. A.B. generated the ~1974 to 2000 mass balance dataset. R.B. generated the glacier terminus retreat data. All authors contributed to the final form of the manuscript.
### Corresponding author
Correspondence to Owen King.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# How can pre-main sequence stars radiate more energy than main-sequence stars?
How can a pre-main sequence star radiate more energy by gravitational contraction than a main-sequence star can by hydrogen fusion?
## 1 Answer
Although pre-main-sequence stars have lower surface temperatures, they are essentially huge contracting clouds of gas, often as much as a parsec across in their earliest stages. Since luminosity is proportional to the square of the radius (at a given temperature), it can be very large for pre-main-sequence stars.
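For reference (this relation is not in the question, just the standard blackbody law): a sphere of radius R and effective temperature T radiates

L = 4\pi R^2 \sigma T_{\mathrm{eff}}^4

so, for example, an object 100 times the Sun's radius at half the Sun's surface temperature is still about 100^2 / 2^4 ≈ 600 times more luminous than the Sun.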
Also, the problem with gravitational contraction is not the amount of energy that can be generated per second; that can actually exceed what fusion produces if you consider sufficiently massive objects. The reason gravitational contraction was ruled out as the Sun's energy source is that it cannot be sustained for long: geologists knew from radiometric dating that the Sun had to be older than gravitational contraction could account for.
So, to sum up: gravitational contraction does let pre-main-sequence stars radiate more energy per unit time than main-sequence stars, but that is luminosity, not total energy. Luminosity is energy per second.
# Solving integral equation using DifferentialEquations
### Problem
I need to solve an equation in x which can be simplified as an MWE into
x = \int_{x}^\infty A(z) dz
Importantly, A(z) is decreasing, integrable, and goes to 0 at infinity.
### Possible solution concept
I thought I would define
K(x) = \int_x^\infty A(z) dz - x
and evaluate eg K(0) using quadrature, then use an ODE solver on
K'(x) = -A(x) - 1
starting from eg 0 and see where it crosses 0.
I just don’t know how to implement this with the DifferentialEquations ecosystem. I looked at the integrator interface, but all I could think of is finding where I cross 0, then backtracking.
Other suggestions for solution methods are appreciated.
For an MWE, consider
A(z) = exp(-z)
for which the (numerically obtained) solution x should satisfy x = e^{-x}.
Why not a Newton solver?
Sorry, I don’t know what that is. An example would be appreciated.
if you know how to compute the integral with quadrature, why do you want to use DiffEq? Why not choose a (non)linear solver?
Use Newton and a quadrature routine. That is, you are trying to find a root of
f(x) = x - \int_x^\infty A(z) dz
The derivative of this is f'(x) = 1 + A(x). So, you can just repeatedly take Newton steps
x \leftarrow x - \frac{f(x)}{f'(x)}
using QuadGK
f(x) = x - quadgk(A, x, Inf, rtol=1e-4)[1]
for example. (QuadGK handles the semi-infinite integral for you by a coordinate transformation.)
OK, I see that you and @stevengj suggest that I just use a univariate solver directly on K(x), and Newton should be a good choice since K'(x) is available at no cost. Got it, thanks!
Yes, for example:
using QuadGK  # provides quadgk for the semi-infinite integral (also loaded in the snippet above)

function mysolve(A, x, rtol)
while true
f = x - quadgk(A, x, Inf, rtol=rtol)[1]
println("f($x) =$f")
Δx = -f / (1 + A(x)) # Newton step
x += Δx
abs(Δx) ≤ abs(x)*rtol && break
end
return x
end
converges to about 10 digits in only 4 steps for A(x) = e^{-x} / (x^2 + 1) starting at an initial guess of x=1:
julia> x = mysolve(t -> exp(-t) / (t^2 + 1), 1, 1e-4)
f(1) = 0.9033475191037287
f(0.23699872265724586) = -0.17704370803589597
f(0.3383383890915101) = -0.005483914249519217
f(0.3416828040022412) = -5.745324205275182e-6
0.34168631519454107
julia> x - quadgk(t -> exp(-t) / (t^2 + 1), x, Inf, rtol=1e-7)[1]
-4.3091752388590976e-11
Wouldn’t you have to use the full Leibniz integration rule here? I believe the correct derivative of K(x) should then be -2A(x) - 1.
Edit: Ah, never mind. The A inside the integral only depends on z, not x explicitly, so you were right. |
Math Help - find an expression for W
1. find an expression for W
$W=\int_{V_{1}}^{V_{2}}{3P}^{\frac{1}{2}}V^{2}\; dV$
If $PV\; =\; \mbox{C} (constant)$
Find an expression for W.
2. Originally Posted by wilko
$W=\int_{V_{1}}^{V_{2}}{3P}^{\frac{1}{2}}V^{2}\; dV$
If $PV\; =\; \mbox{C} (constant)$
Find an expression for W.
Substitute $P=\frac{C}{V}$ in the integral. |
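Carrying that substitution through (reading the integrand as $\left(3P\right)^{\frac{1}{2}}V^{2}$, as the formatting suggests; if the 3 is meant to sit outside the square root, just carry it along as a constant factor):

$W=\int_{V_{1}}^{V_{2}}\left(\frac{3C}{V}\right)^{\frac{1}{2}}V^{2}\; dV\; =\; \sqrt{3C}\int_{V_{1}}^{V_{2}}V^{\frac{3}{2}}\; dV\; =\; \frac{2\sqrt{3C}}{5}\left(V_{2}^{\frac{5}{2}}-V_{1}^{\frac{5}{2}}\right)$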
Gene expression reversal toward pre-adult levels in the aging human brain and age-related loss of cellular identity
Abstract
It was previously reported that mRNA expression levels in the prefrontal cortex at old age start to resemble pre-adult levels. Such expression reversals could imply loss of cellular identity in the aging brain, and provide a link between aging-related molecular changes and functional decline. Here we analyzed 19 brain transcriptome age-series datasets, comprising 17 diverse brain regions, to investigate the ubiquity and functional properties of expression reversal in the human brain. Across all 19 datasets, 25 genes were consistently up-regulated during postnatal development and down-regulated in aging, displaying an “up-down” pattern that was significant as determined by random permutations. In addition, 113 biological processes, including neuronal and synaptic functions, were consistently associated with genes showing an up-down tendency among all datasets. Genes up-regulated during in vitro neuronal differentiation also displayed a tendency for up-down reversal, although at levels comparable to other genes. We argue that reversals may not represent aging-related neuronal loss. Instead, expression reversals may be associated with aging-related accumulation of stochastic effects that lead to loss of functional and structural identity in neurons.
Introduction
The human brain undergoes significant structural and functional changes during both prenatal and postnatal development1, 2. However, changes are not limited to the developmental period, but continue in adult individuals over their lifetime, termed brain aging3. Aging-related changes include loss of gray and white matter volume, increased inflammation, loss of dendritic spines, elevated axonal bouton turnover rates, and a general loss of synaptic plasticity, which is paralleled by declining cognitive function and elevated risk of neurodegenerative disease4,5,6,7. Nevertheless, the molecular mechanisms behind aging-related phenotypic changes are only barely understood.
One method to study molecular mechanisms of aging is transcriptome analysis, i.e. investigating how mRNA levels change with adult age. A prefrontal cortex transcriptome age-series analysis we had conducted earlier found that the majority of gene expression changes in aging (post-20 years of age) occurred in the opposite direction of gene expression changes in postnatal development (pre-20 years of age)8. More specifically, genes up- (or down-) regulated during postnatal development were down- (or up-) regulated in aging, displaying a “reversal” pattern. A similar observation was made in a subsequent, independent study of frontal cortex aging9, suggesting that reversal in brain aging may be a common trend.
Processes important for the establishment of functional neuronal networks, such as axonogenesis, myelination, synaptogenesis and synaptic maturation, are active during postnatal development10, 11 and involve specific gene expression changes in neurons8, 9, 12. These molecular and morphological changes are thought to help constitute the cellular identity of neurons, that is, the molecular and physiological characteristics that define a mature, differentiated cell. Aging-related expression reversal among genes involved in these processes may represent the loss of cellular identity and/or declining physiological activity in old neurons, and a possible link to functional cognitive decline with age.
The prevalence of gene expression reversal in aging has not yet been systematically tested across different brain regions or in different datasets. Nor has it been shown that reversal is associated with neuronal functions, as would be expected under the loss of cellular identity model. Here we use multiple datasets that measured transcriptome profiles across postnatal development and aging in diverse brain regions, to gain insight into the prevalence and biological essence of the reversal trend. We confirm the presence of a common gene set showing reversal in their expression profiles across different brain regions, in the form of developmental up-regulation followed by down-regulation in aging (“up-down”). We further study the functional associations of these expression patterns.
Results
In order to compare molecular changes in the brain before and after adulthood, we compiled data from three published human postmortem brain transcriptome age-series covering postnatal lifetime, produced by three different laboratories, each based on different microarray platforms. To our knowledge, these are the only brain transcriptome datasets that include samples distributed across the whole human postnatal lifetime. The 19 datasets comprise 17 brain regions and 1,017 samples in total, and individual ages span from 0 to 98 years of age (Table 1, Figure S1).
We divided each dataset in two parts by categorizing samples from individuals between 0–20 years of age as (postnatal) “development”, and those from individuals 20 years of age or older as “aging”. This division is based on the approximate age of first reproduction in diverse human societies13, and it also represents a general turning point in brain development and functionality, such as the initiation of certain aging-related cognitive decline trends6 (we confirm that using earlier or later turning points yields qualitatively similar results in downstream analyses; see Methods and Figure S13). Separating each dataset into development and aging yielded 38 subdatasets in total.
In downstream analyses, we model gene expression changes as linear processes (i.e. increase or decrease) in both periods, i.e. development and aging, separately. For this reason, we did not include prenatal brain samples in this study, as developmental changes can be discontinuous between pre- and postnatal periods at both histological10, 11 and molecular9, 12 levels.
Gene expression change during development and aging
To obtain an overview of the data, we measured age-related change for each gene using the Spearman correlation coefficient (rho) between individual age and expression levels, separately for each subdataset (14,356–22,714 genes). We then studied the consistency of these age-related change measures in development and in aging across all subdatasets. Specifically, we calculated the correlation of age-expression correlations among all pairs of subdataset combinations. Within each period, different datasets show positive correlation with each other, indicating parallel age-related changes (Fig. 1a). However, higher correlation is observed among developmental datasets (median rho = 0.52) than in aging (median rho = 0.40) (Wilcoxon signed rank test p < 10−15), a result that may be related to higher gene expression noise in aging than in development (see Discussion).
We next conducted principal components analysis on age-related change trends measured in the subdatasets. We observed that ontologically related brain regions cluster more visibly in development than in aging (Fig. 1b). This result again implies higher stochasticity in age-related expression changes during aging than in development.
Notably, both analyses revealed that transcriptome-wide age-related change trends are distinct between development and aging (Fig. 1). In fact, correlations between age-related expression change measured in development and in aging were frequently negative, in line with reversal of gene expression during aging to pre-adult levels (55% of pairwise comparisons were negative, as shown with reddish tone in Fig. 1a; median rho = −0.02).
Shared age-related expression change across datasets
We next sought to identify shared age-related change patterns in development and in aging across the 19 datasets. For this, we first tested each gene for age-related expression change in each of the subdatasets separately, correcting for multiple testing. This revealed between 0–30% (median 2%) of genes showing significant change during development among datasets, and 0–9% (median 0%) of genes showing significant change during aging (Figure S2). Thus, in either development or in aging, no common genes can be detected with this approach, which likely reflects both biological and technical noise preventing age-related change from being reliably detected in a single dataset. We therefore resorted to an alternative approach, and asked if we could identify common development- or aging-related genes that show expression change in the same direction across all 19 datasets (Fig. 2a). Specifically, we categorized all genes as showing up- versus down-regulation trends with age, irrespective of effect size; we then identified the set of common genes showing the same trend in all datasets; and finally, we tested whether a set of such size can be expected randomly, using 1,000 structured permutations of individual labels in each dataset (i.e. randomizing age while keeping the individual identities fixed across all datasets from each data source; see Methods).
Using this approach, we found 1422 genes (13% of 11,258 genes expressed in all datasets) showing consistent expression change during development across the 19 datasets, while only 149 are expected by chance (one-sided permutation test p < 0.001; Figure S3a). For aging, we found 565 (5%) such consistent genes (expected = 156, one-sided p = 0.008; Figure S3b). Thus, although age-related change trends may be too weak to be detected in individual datasets, searching for consistency across datasets and testing the result against the null hypothesis of no shared age-related change can provide sufficient power to determine shared trends.
Microarrays do not bias against identifying aging-related expression change
Why is there a deficiency of significant aging-related expression changes, relative to those in development? This is not simply related to differences in statistical power, as all datasets comprise more adults than pre-adults (Figure S1). Another explanation pertains to technical artifacts such as the use of microarrays. Microarrays cannot measure absolute expression levels as accurately as RNA-sequencing (RNA-seq)14 and they only probe a pre-determined set of genes and isoforms. But because of a lack of well-powered RNA-seq studies that include samples representing the whole lifespan (see Methods), our study was restricted to three microarray experiments. Could microarray data be biased against identification of aging-related expression change? To address this we calculated expression-age correlations using an RNA-seq dataset including 13 different brain regions produced by the GTEx Consortium15 and compared these with expression-age correlations calculated using the microarray-based datasets. This showed no indication of clustering by platform (Figure S4). We also examined the possibility of a gene representation bias in microarray studies. Comparing expression-age correlations among all genes detected in an RNA-seq dataset with the ones detected by both RNA-seq and microarray datasets also revealed no indication that genes represented on the microarrays are biased towards higher or lower age-expression correlation with age (Figure S5). We conclude that use of microarray data is likely not the cause of the low numbers of aging-related genes identified.
Shared reversal trends across datasets
We next studied reversal, i.e. a gene’s expression levels changing in the opposite direction between development and aging. Because in most datasets no genes show significant expression change during aging, instead of limiting our analysis to the few individually significant genes, we used the advantage of the multiple datasets available and sought shared trends across all 19 datasets (irrespective of effect size of age-related change, as above). For this, we first classified all expressed genes into 4 categories, depending on their up- or down-regulation tendencies in development or in aging, in each dataset (Fig. 2b). This revealed ~50% (37–59%) of genes showing reversal trends across the 19 datasets (notably, a 50% ratio would already be expected by chance). Next, we used two approaches to test the statistical significance of the observed reversal trends.
In the first approach, we determined the set of genes showing shared reversal trends across all 19 datasets (again irrespective of effect size) and calculated the significance of this gene set using random permutations of individual labels in each dataset (see Methods). There were 87 genes showing consistent change across all datasets both in development and in aging. Among these, 35 showed consistent reversal (Fig. 2c and Table S1). Of these 25 showed an up-down pattern (up-regulation in development and down-regulation in aging) shared across all 19 datasets, which was significantly more than by chance (expected = 5, one-sided p = 0.031). Meanwhile, 10 showed a down-up pattern, which was non-significant (expected = 5, one-sided p = 0.246). We confirmed the up-down trends identified among the 25 genes in additional transcriptome datasets of brain development and aging (permutation test p < 0.0001; see Methods).
Shared functional processes enriched in reversal trends
In the second approach, we tested whether particular functional categories (not necessarily individual genes) show shared enrichment in reversal patterns across all the 19 datasets. Here we compared the reversal proportion among genes assigned to a Gene Ontology (GO) category with the reversal proportion among all other genes, thus calculating an odds ratio for reversal (see Methods and Figure S6a). Importantly, we kept the developmental trend constant, such that we compared the up-down (or down-up) pattern, with the up-up (or down-down) pattern. In total, we calculated reversal odds ratios for 13,392 GO Biological Process (BP) categories. We found 11 shared categories with more down-up than down-down genes across all 19 datasets (odds ratio >1), which involves categories related to differentiation and morphogenesis (Table S2); but the result was not significant in permutations of individual ages (p = 0.4). In contrast, there were 113 shared categories with more up-down than up-up genes in all datasets, more than expected by chance (expected = 11, one-sided p = 0.017, Figure S3c). Categories enriched in up-down genes vs. up-up genes were mainly involved in neuronal functions, synaptic functions, diverse macromolecule modification and localization, as well as signaling processes (Fig. 3 and Table S3). We also repeated the analysis after removal of duplicated GO BP categories and showed that the significance of shared categories was not driven by the presence of duplicated categories (Figure S7).
When we repeated the same analysis for GO Molecular Function (MF) and GO Cellular Component (CC) categories, we did not detect significantly shared categories for the down-up pattern across different datasets (Table S4 and Table S6), whereas categories enriched in up-down pattern were found to be significant (one-sided p = 0.037 for GO MF and p = 0.008 for GO CC). Shared up-down GO MF categories are mainly related to post-translational modifications (Table S5) and shared up-down GO CC categories involve mainly neuronal or synapse-related genes (Table S7).
Thus, both analyses showed that one reversal pattern, up-down, can be consistently detected in brain aging across diverse brain regions, and is significantly associated with specific functional categories, including certain neural functions. In the following analyses, we investigate the biological significance of up-down expression patterns to test: (1) whether up-down reversal may be driven by trans regulators, (2) whether up-down reversal represents neuronal loss, (3) whether up-down reversal involves genes with roles in neuronal differentiation, which would be compatible with the loss of cellular identity hypothesis, and (4) whether up-down reversal shows association with genes dysregulated in Alzheimer’s Disease.
Shared regulators of up-down reversal trends
We asked whether genes showing up-down patterns may be regulated by specific trans factors. To investigate this we tested whether up-down genes, compared to up-up genes, were enriched among targets of specific microRNAs (miRNAs) or transcription factors (TFs) (see Methods). Testing 1,078 miRNAs and 211 TF binding sites separately, we found one miRNA and 4 TFs consistently enriched across the 19 datasets for the up-down pattern relative to up-up; however, compared to permutations these results were non-significant for the miRNA (one-sided p = 0.56) and only marginally significant for the TFs (one-sided p = 0.069).
Altered neuronal contribution during aging
Given that neuronal processes up-regulated during postnatal development show decreasing expression during aging across all brain regions, we first checked whether genes assigned to neuron-related categories overall show more up-down reversal compared to non-neuronal genes. A shared trend was observed which was marginally significant (Figure S8, one-sided p = 0.095), implying that up-down reversal is likely not specific to neuronal genes. We further asked whether neuronal expression might show overall reduction relative to those of other CNS cell types. To address this we used two more published datasets: a mouse brain cell type-specific microarray dataset17, and a human brain single cell RNA-sequencing dataset18. We then briefly studied the relative contributions of different cell types’ transcriptomes (including neurons, astrocytes, oligodendrocytes) to each sample in the age-series datasets, using a simple linear regression-based deconvolution approach (Methods). In all datasets expression levels in whole tissue predominantly reflect neuronal expression throughout the lifespan, as opposed to expression from other cell types (Figure S9). A subtle decrease in neuronal contribution was detectable across most datasets: Depending on the cell type specific dataset we used, 90–100% of datasets showed decreasing relative neuronal expression during aging (Figure S9; the median correlation coefficient between age and neuronal contribution across datasets ranged between −0.10 to −0.35). A parallel, consistent increase in astrocyte contributions could also be observed. We note, however, that because this analysis relies on expression profile comparisons from different platforms, the result cannot be directly interpreted as altered cell type proportions. Cell autonomous but systemic loss of gene expression in neurons can also lower neuronal contribution to the tissue level mRNA pool (see Discussion).
Up-down reversal and neuronal differentiation
To test a possible association between reversal in aging and loss of cellular identity, we compared reversal patterns with expression patterns related to differentiation. Our first hypothesis was that differentiation-related genes should show more up-down patterns than up-up patterns. For this, we used a human iPSC-derived neuronal differentiation dataset19. We determined that 476 to 651 genes showed up-regulation both during in vitro differentiation and during development across the 19 datasets (see Methods); these are ideal candidates for having a role in postnatal neuronal identity establishment. We found that these genes are prone to show reversal, i.e. to be down-regulated rather than up-regulated during aging: 16/19 datasets showed more up-down reversal than up-up patterns (Fig. 4), whereas no significant trend was observed in the opposite direction, significance being measured by permutations of individual ages in the aging datasets. Still, the overall significance of finding 16/19 datasets was only marginal (one-sided p = 0.085). Second, we tested whether differentiation-related genes show more up-down patterns than non-differentiation-related genes by permuting sample stages in the iPSC-derived neuronal differentiation dataset, which was not significant (one-sided p = 0.144). Thus, genes related to differentiation that are up-regulated during postnatal development are also inclined to be down-regulated during aging, in line with the notion of cellular identity loss. Meanwhile, the up-down trend among differentiation-related genes is comparable to that in the rest of the transcriptome, not necessarily stronger.
Discussion
Aging, unlike development, is frequently considered not to be an evolutionarily programmed process that is adaptive per se, but a result of stochastic evolutionary and cellular/physiological events21. Likewise, aging-related molecular changes are supposed to be driven by accumulating stochastic events, affecting each individual, and possibly each cell, differently. As a result, studying aging phenotypes with limited sample sizes is challenging. This is especially so in humans, where the environment is uncontrolled.
Here we adopted a meta-analysis approach that, instead of seeking for significant signals in individual datasets, focuses on shared tendencies among multiple distinct datasets and different brain regions. Our approach will miss region-specific aging patterns. But at the same time, it has high sensitivity for shared expression change trends, because it includes trends too weak to pass significance thresholds in a single dataset. Our method is expected to minimize the influence of the confounding factors in individual datasets and thereby improve specificity and reproducibility. The approach can further reveal shared trends at the functional category level instead of the single gene level.
Our analysis identified shared and statistically significant expression change patterns in brain development and in brain aging, leading to a number of observations:
Noise in aging
We found conspicuously more shared expression change during development than in aging (Figs 1 and 2a), which was not a statistical power issue (Figure S1), nor a technical artifact (Figures S4 and S5). Rather, weaker expression changes in aging could occur because developmental expression changes are of higher magnitude (hence with higher signal/noise ratio) than those in aging8. Alternatively, aging-related expression changes may involve higher inter- and intra-individual variability than in development9, 22, 23, which could arise due to stochastic environmental or cellular effects. We also found that down-regulations were more prominent among shared aging-related changes than among development-related changes (Fig. 2a), again implying that aging-related expression changes may be particularly influenced by disruptive stochastic effects which may be expected to drive down-regulation more frequently than up-regulation. Thus, shared expression patterns across datasets hint at aging being subject to higher noise than development.
Prevalence of up-down patterns
We identified shared up-down reversal patterns across the 19 different datasets, involving 25 genes (shared down-up patterns were not significant). More importantly we found significant similarities among datasets at the pathway level: genes showing up-down trends were enriched among specific functional categories including neuron-related and synaptic processes. Genes activated in iPSC-derived neuronal differentiation and in postnatal development, which could include genes critical for neuronal function, also tended to be down-regulated during aging (even though at levels comparable to other gene sets). Combined, these results suggest that genes associated with neuronal and synaptic function, among others, may lose activity with advancing age in the brain, reminiscent of aging-related synaptic loss in the mammalian brain, a major culprit for aging-related decline in cognitive abilities24.
Why would critical neuronal genes be down-regulated during aging, rather than maintain their young adult expression levels? Here we consider three hypotheses: (a) extension of synaptic pruning, (b) cell type composition changes, and (c) cellular identity loss caused by damage and epigenetic mutations.
Extension of synaptic pruning
One possible culprit behind the up-down reversal pattern is synaptic pruning, a developmental process that initiates in postnatal development and may drive down-regulation of synaptic genes after childhood25. If synaptic pruning is indeed responsible for the observed up-down patterns, we expect up-regulation trends to be replaced by down-regulation already during childhood. In other words we would expect gene expression turning points arising before 20 years of age. Clustering all 25 shared up-down genes’ expression patterns and inspecting their turning points (i.e. maxima), we found that most clusters, although not all, had turning points around 20 years of age (Fig. 2c). Repeating this clustering analysis with 638 genes in all synapse-related categories (categories clustered as “Group 9” in Fig. 3), we found a range of turning points supported by multiple datasets (Figure S11). Specifically, while some gene clusters peak early in life (see Cluster 18 in Fig. 5), others clearly peak after 20 years of age (see Clusters 6 and 12 in Fig. 5). Overall, most up-down patterns do not arise before adulthood, arguing against the possibility that reversals represent the continuation of developmental processes that initiate during childhood.
Cell type composition change
The brain is a heterogeneous tissue comprised of different cell types, whereas all the analyzed datasets have been produced using whole tissue samples. This raises the question whether the up-down reversal pattern associated with neuronal processes represents cell autonomous gene expression changes, or changes in brain cell type proportions26.
In contrast to the established phenomenon of synaptic loss with age5, 27, histological evidence for aging-related cell type composition change, specifically neuronal loss, remains unclear. Multiple stereological studies of the primate cortex have reported no aging-related neuronal loss28, 29. Meanwhile, a fractionation experiment in the rat brain found ~30% decrease in neuron numbers between adolescence and old age30 (but the method remains to be applied to primate brain samples). Finally, a recent study using image analysis of NeuN-marked human brain sections reported loss of only large bodied neurons, representing 20% of the neuron population31. Our deconvolution analysis results are also equivocal (Figure S9): they could be compatible with modest neuronal loss during human brain aging, but also with cell autonomous loss of neuron-specific expression.
If neuronal loss was the main source of shared up-down patterns among brain regions, we may expect coordinated expression changes that affect multiple neuron-specific markers shared among brain regions. These would also be expected to be shared at the single gene level. However, the up-down reversal patterns are mainly shared at the functional process level, rather than at the gene level. This argues against a major role for cell type composition shifts driving up-down reversal. Nevertheless, shared neuronal loss remains a possibility if low signal/noise ratios could be blurring putative shared expression patterns. The question hence awaits to be addressed by future cell type-specific age-series datasets that cover the whole lifetime.
Loss of cellular identity
Finally, shared up-down expression patterns could also be driven by age-related cellular damage and genetic/epigenetic mutation accumulation. We can postulate two mechanisms: (a) the accumulation of stochastic mutations or epimutations directly disrupting normal neuronal function, or (b) regulated survival responses under accumulating insults32.
In this regard, it is interesting that we cannot definitely associate up-down reversal patterns with common regulators such as miRNA and TFs. If not a false negative result, this may suggest alternative regulatory factors (e.g. chromatin modifiers, RNA binding proteins, or yet unidentified TFs) driving up-down patterns. But it may also suggest that up-down reversal is driven by stochastic effects, such as accumulating DNA or protein damage or epigenetic mutations. If such stochastic effects can convergently disrupt the expression of a common, vulnerable set of neuronal genes across multiple cells and across different individuals, they could give rise to shared up-down reversal patterns. Future single cell RNA-sequencing and epigenomics age-series that include both development and aging may help illuminate the exact drivers of up-down reversal.
We find that genes prone to down-regulation in AD do not show exclusive up-down patterns in normal subjects. This limited overlap between AD and up-down reversal may be expected, as most expression down-regulation in AD is likely driven by acute processes such as neuronal apoptosis33, whereas up-down patterns in normal aging probably involve more subtle aging phenotypes, such as synaptic loss.
In summary, the up-down reversal phenomenon supports the notion that synaptic loss and cognitive decline observed in normal brain aging may be linked to gradual cellular identity loss, driven by accumulating stochastic intracellular events.
Methods
Development vs. aging
To identify and compare expression changes in postnatal development and in aging, we divided individuals in each dataset using 20 years of age as point of separation (or turning point). In human societies, this corresponds to the age at first reproduction13. Earlier transcriptome studies8, 9 have also suggested age of 20 as a global turning point in brain gene expression trajectories. Nevertheless, in order to assess prevalence of reversal throughout the lifespan, the full analysis was repeated with different ages used as turning points (see “different turning points” and Figure S13).
Age-related expression change
We used the Spearman correlation coefficient to assess age-related expression changes. P-values were corrected for multiple testing through the Benjamini-Yekutieli (BY) procedure54 using the “p.adjust” function in the base R library.
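The original analysis was done in R; an equivalent sketch in Python, with expr as a genes × samples array and age as a vector of individual ages (both hypothetical names), could be:

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def age_related_change(expr, age):
    """Spearman rho and Benjamini-Yekutieli adjusted p-value for each gene (rows of expr)."""
    rho, pval = zip(*(spearmanr(gene, age) for gene in expr))
    _, p_adj, _, _ = multipletests(pval, method="fdr_by")  # Benjamini-Yekutieli correction
    return np.array(rho), p_adj
```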
Permutation test
We used random permutations of individual labels to assess the probability of finding the same or higher number of shared observations (e.g. number of shared genes across all datasets showing the same expression change pattern, or the number of shared GO groups across all datasets with odds ratio >1), and to estimate the false discovery rate (Figure S6b). We designed the permutation procedure to account for non-independence among subdatasets caused by the presence of the same subjects within Kang2011 and within Somel2011. Specifically, in each permutation, the individuals’ ages were randomly permuted within each data source and period (i.e. individual labels are permuted within development or aging samples to simulate the null hypothesis of no age effect within that period), and in each permutation, the same individuals were assigned the same age across different brain regions. We thus simulated the null hypothesis of no change during aging across the transcriptome, while maintaining dependence among genes and dependence among subdatasets. For permutations, we used the “sample” function in R.
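A sketch of this structured permutation is given below; the data layout (for each data source, a table mapping individuals to their period and age, with brain regions of the same source looking ages up by individual) is a hypothetical construction, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def permute_ages(individuals):
    """Shuffle ages within each (data source, period) group; every brain region of a
    source reuses the same permuted age for a given individual."""
    permuted = {}
    for source, table in individuals.items():          # table: {individual: (period, age)}
        for period in ("development", "aging"):
            ids = [i for i, (p, _) in table.items() if p == period]
            shuffled = rng.permutation([table[i][1] for i in ids])
            for i, new_age in zip(ids, shuffled):
                permuted[(source, i)] = float(new_age)  # same age everywhere for this individual
    return permuted
```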
Gene expression clustering
In order to cluster genes according to their expression profiles across all datasets, the k-means algorithm (the “kmeans” function in R) was used. We first standardized each gene’s expression level to mean = 0 and s.d. = 1. Directly combining scaled expression datasets and applying k-means would be misleading, because datasets have different numbers of samples. We therefore used standardized expression levels to calculate expression-age spline curves for each gene in each dataset (using the “smooth.spline” function in R) and interpolated these at 20 equally spaced age points within each dataset. Here, we used the fourth root of age (in days), which provides a relatively uniform distribution of individual ages across the lifespan8. We then combined the interpolated expression values across datasets and used these to run the k-means algorithm. Because the clustering analysis was conducted to study the diversity of turning points, we tried a range of cluster numbers and ensured that different choices of k yield the same conclusion with respect to the diversity of peak expression times (data not shown).
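A rough Python equivalent of this interpolate-then-cluster step (the paper used R's smooth.spline and kmeans; the smoothing settings and k below are placeholders), with each dataset given as a standardized genes × samples matrix plus an age vector:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.cluster import KMeans

def spline_profiles(expr_z, age, n_points=20):
    """Fit a smoothing spline per gene and sample it at n_points evenly spaced ages."""
    grid = np.linspace(age.min(), age.max(), n_points)
    uniq_ages = np.unique(age)
    profiles = []
    for gene in expr_z:
        # average individuals with identical ages (the spline requires increasing x values)
        y = np.array([gene[age == a].mean() for a in uniq_ages])
        profiles.append(UnivariateSpline(uniq_ages, y)(grid))
    return np.array(profiles)

def cluster_genes(datasets, k=20):
    """Concatenate interpolated profiles across datasets (same gene order assumed) and cluster."""
    combined = np.hstack([spline_profiles(e, a) for e, a in datasets])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(combined)
```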
Functional analysis
We used Gene Ontology (GO)55 categories for functional analysis. The “GO.db”56, “AnnotationDbi”57 and “org.Hs.eg.db”58 libraries in R were used in order to access the GO database and associated gene annotations (date of retrieval: March 26, 2016). In total, we used (1) 13,392 Biological Process, (2) 3,769 Molecular Function and (3) 1,601 Cellular Component categories containing (1) 15,754, (2) 15,805 and (3) 16,768 of the genes expressed in at least one dataset. We tested enrichment of the reversal pattern keeping the developmental change fixed. For instance, the down-up pattern was compared with down-down genes in each GO category, and an enrichment odds ratio (OR) was calculated for genes in that GO category compared to genes not in that GO category. Likewise, the up-down pattern was compared with the up-up pattern. Next, across all 19 datasets, we searched for consistent over-representation (i.e. OR > 1 in all datasets) of the reversal pattern for each GO BP category. The significance of sharing across datasets was tested using random permutations of individual ages (as described earlier). A schematic representation of the permutation test used for the functional analysis is given in Figure S6. To test the contribution of duplicated GO categories to the detected shared significance, we repeated the same analysis after removal of such duplicated GO categories (9,666 GO BP categories and 15,754 genes). To summarize the shared GO categories, we used the REVIGO16 algorithm, which clusters GO categories based on their semantic similarities, with the options similarity cutoff = 0.7, database = “Homo Sapiens”, semantic similarity measure = “SimRel”. The results were visualized in R using the “treemap” library59.
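Per category and dataset, the odds ratio reduces to a 2 × 2 table; a sketch with boolean gene-level vectors (in_category, dev_up, aging_down are our names, not the paper's):

```python
import numpy as np
from scipy.stats import fisher_exact

def updown_odds_ratio(in_category, dev_up, aging_down):
    """Odds ratio of up-down vs. up-up genes, inside vs. outside a GO category."""
    up_down = dev_up & aging_down
    up_up = dev_up & ~aging_down
    table = [[int(np.sum(up_down & in_category)), int(np.sum(up_up & in_category))],
             [int(np.sum(up_down & ~in_category)), int(np.sum(up_up & ~in_category))]]
    odds_ratio, _pvalue = fisher_exact(table)
    return odds_ratio
```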
Regulatory analysis
The “biomaRt” library in R was used to access Ensembl and TarBase60 Databases, to retrieve miRNA-target gene associations. In total, 1,078 miRNA with 13,458 target genes were analyzed. For the transcription factor binding site (TFBS) determination (1) +/−2000 base pairs of the transcription start site for each gene was extracted using Ensembl annotations, (2) within these sequences, transcription factor binding sites were predicted using the TRANSFAC database and Match algorithm61, (3) for each TFBS, phastCon scores were calculated using UCSC Genome Browser 17-way vertebrate Conserved Element table62, and (4) conserved TFBS were defined if 80% or more nucleotides had defined phastCon score and if the average score was 0.6 or more. In total 211 TFBS with 16,594 associated genes were analyzed (data courtesy of Xiling Liu and Haiyang Hu). The over-representation analysis was conducted in the same way as for the functional analysis, keeping the developmental pattern fixed and searching for shared enrichment of the reversal pattern among targets of each miRNA/TF. The significance of the results was tested using random permutations of individual ages.
Differentiation-related genes
Neuronal differentiation-related genes were determined using data from a human iPSC-derived neuronal differentiation dataset19. We applied the Spearman correlation test across three stages of differentiation (iPSC, neurosphere and neuron), and genes showing significant correlation with differentiation stage after multiple test correction (q < 0.05) were considered differentiation-related. First, we tested the consistency of expression change directions between neuronal differentiation and postnatal brain development, which was low (median = 48% for down- and median = 44% for up-regulated genes); this might be expected, as many neuron development-related genes are down-regulated during the postnatal phase of brain development8. We then gauged the significance of the up-down reversal trend among genes up-regulated in neuronal differentiation. For this, we used the reversal proportion, calculated as the ratio of up-down to up-up genes among genes up-regulated in differentiation (we thus control for possible parallels between differentiation and development). We then compared this observed proportion to those calculated from 1,000 random permutations of individual labels in the aging datasets. To calculate the overall significance across datasets, we compared the number of datasets with more than 50% reversal among genes up-regulated in differentiation with 1,000 random permutations (including all datasets). To test whether genes up-regulated in differentiation show more reversal than other genes, we used a permutation test, permuting the labels for differentiation stages. We thus simulated the null hypothesis that there is no effect of differentiation, while maintaining the association among genes. Because in each permutation only very few differentiation-related genes pass q < 0.05, reversal proportions calculated from these small numbers vary greatly, so a reliable null distribution for the reversal proportion cannot be obtained this way. In order to keep the number of differentiation-related genes the same as in the real observation, instead of requiring significant up-regulation in the permutations, we first sorted the genes according to the effect sizes in permutations and then continued the downstream analysis with the most up-regulated N genes that are also up-regulated in postnatal development, N being the number of genes significantly up-regulated in the real differentiation data and also up-regulated in postnatal development. N ranged between 476 and 651 across the age-series datasets.
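The reversal proportion itself is a simple ratio; a sketch with boolean vectors as above, where diff_up marks genes up-regulated during differentiation (names are ours):

```python
import numpy as np

def reversal_proportion(diff_up, dev_up, aging_down):
    """Up-down genes per up-up gene, among genes up-regulated in differentiation and development."""
    candidates = diff_up & dev_up
    n_up_down = np.sum(candidates & aging_down)
    n_up_up = np.sum(candidates & ~aging_down)
    return n_up_down / n_up_up
```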
Cell type-specific expression analysis
Two different cell type-specific expression datasets17, 18 were used to analyze the relative contribution of different cell types to the expression profile of each sample. Because the cell type-specific data and the age-series data come from different platforms, we could not apply sophisticated deconvolution algorithms, such as CIBERSORT63, and therefore analyzed the relative contribution of different cell types’ transcriptomes to each sample in the age-series datasets using a simple linear regression-based deconvolution approach. For both datasets, we first calculated the mean expression levels across the replicates of each main cell type: astrocytes (A), oligodendrocytes (Oli), myelinating oligodendrocytes (M_Oli), oligodendrocyte precursor cells (OPC), neurons (N), fetal quiescent (FQuies), fetal replicating (FRep), and endothelial (E). The relative contributions are represented by the regression coefficients calculated according to the following linear regression models:
$$Z_{sample}=\alpha+\beta_{A}\zeta_{A}+\beta_{Oli}\zeta_{Oli}+\beta_{M\_Oli}\zeta_{M\_Oli}+\beta_{OPC}\zeta_{OPC}+\beta_{N}\zeta_{N}+\varepsilon \quad \text{(for the Cahoy2008 dataset)}$$ $$Z_{sample}=\alpha+\beta_{A}\zeta_{A}+\beta_{Oli}\zeta_{Oli}+\beta_{M}\zeta_{M}+\beta_{FQuies}\zeta_{FQuies}+\beta_{FRep}\zeta_{FRep}+\beta_{E}\zeta_{E}+\beta_{OPC}\zeta_{OPC}+\beta_{N}\zeta_{N}+\varepsilon \quad \text{(for the Darmanis2015 dataset)}$$
where Ζ represents the expression profile per gene, β’s represent the regression coefficients that estimate the relative contributions of each cell type, ζ’s represent expression level of each cell type (averaged across replicates), and ε represents residuals. The models were fit using the R “lm” function.
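The fit above is ordinary least squares; a Python sketch (the paper used R's lm), with cell_profiles a genes × cell-types matrix of mean profiles and sample_expr a genes-long vector for one sample (hypothetical names):

```python
import numpy as np

def cell_type_contributions(sample_expr, cell_profiles):
    """OLS estimate of relative cell-type contributions (the intercept corresponds to alpha)."""
    X = np.column_stack([np.ones(len(sample_expr)), cell_profiles])
    coefs, *_ = np.linalg.lstsq(X, sample_expr, rcond=None)
    return coefs[1:]  # beta coefficients, one per cell type
```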
Different turning points
In order to assess the prevalence of reversal at other ages, we repeated the analysis taking 5, 10, 15, 30, and 40 years of age as turning points (Figure S13). As the turning point increases, the correlations among development datasets increase, whereas the opposite trend is observed for correlations between periods, i.e. between development and aging. Similarly, the consistency of gene expression changes in development increases with later turning points, whereas the same trend is not observed in aging, where consistency starts to decrease after the age of 20. Consistent with this observation, the number of genes showing consistent reversal increases with the age used as the turning point, except for the age of 40; this is expected, as the consistency of aging-related changes is low when 40 is used as the turning point. One important observation is that, for all turning points, the number of consistent up-down genes is higher than that of down-up genes. Finally, we also asked whether the functional associations of these patterns overlap. Among up-down enriched groups, all turning points share the most GO BP groups with the age of 20; this is not observed among down-up enriched functional groups, and the overall consistency is lower for down-up than for up-down.
Confirmation of the 25 up-down genes
For postnatal development we used a prefrontal cortex dataset produced using Affymetrix HG-U133P2 arrays, containing 28 human individuals below age 2064. About 1/2 of the individuals used in this study overlap with those used in Somel et al. 2010 and thus are not independent, but importantly, the data has been generated using Affymetrix 3′ arrays, which are significantly different from the Gene and Exon Arrays used in the main analyses. The dataset was downloaded as a pre-processed “series matrix file” from NCBI GEO36 with ID number GSE11512. Among 23 of the 25 genes represented in this dataset, all were up-regulated during postnatal development (Spearman rho ≥0.49). We confirmed the significance of this result by randomly choosing 23 genes from the dataset 10,000 times (p < 0.0001). For aging, we used the GTEx dataset produced by RNA-sequencing. For each of the 13 brain regions, we calculated the proportion of genes among the 25 shared up-down genes, among which expression levels were down-regulated. The median proportion among the 13 datasets was 0.96. Testing the significance of each result using randomization yielded a p-value of <0.0001 for each brain region, except for caudate basal ganglia, where we estimated p = 0.0518 (FDR corrected).
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Stiles, J. & Jernigan, T. L. The basics of brain development. Neuropsychol. Rev. 20, 327–348 (2010).
2. Jiang, X. & Nardelli, J. Cellular and molecular introduction to brain development. Neurobiol. Dis., doi:10.1016/j.nbd.2015.07.007 (2015).
3. Peters, A., Sethares, C. & Luebke, J. I. Synapses are lost during aging in the primate prefrontal cortex. Neuroscience 152, 970–981 (2008).
4. Grillo, F. W. et al. Increased axonal bouton dynamics in the aging mouse cortex. Proc. Natl. Acad. Sci. 110, E1514–E1523 (2013).
5. Sowell, E. R., Thompson, P. M. & Toga, A. W. Mapping changes in the human cortex throughout the span of life. Neuroscientist 10, 372–392 (2004).
6. Salthouse, T. A. When does age-related cognitive decline begin? Neurobiol. Aging 30, 507–514 (2009).
7. Dorszewska, J. Cell biology of normal brain aging: synaptic plasticity-cell death. Aging Clin. Exp. Res. 25, 25–34 (2013).
8. Somel, M. et al. MicroRNA, mRNA, and protein expression link development and aging in human and macaque brain. Genome Res. 20, 1207–1218 (2010).
9. Colantuoni, C. et al. Temporal dynamics and genetic control of transcription in the human prefrontal cortex. Nature 478, 519–523 (2011).
10. Huttenlocher, P. R. & Dabholkar, A. S. Regional differences in synaptogenesis in human cerebral cortex. J. Comp. Neurol. 387, 167–178 (1997).
11. de Graaf-Peters, V. B. & Hadders-Algra, M. Ontogeny of the human central nervous system: what is happening when? Early Hum. Dev. 82, 257–266 (2006).
12. Kang, H. J. et al. Spatio-temporal transcriptome of the human brain. Nature 478, 483–489 (2011).
13. Walker, R. et al. Growth rates and life histories in twenty-two small-scale societies. Am. J. Hum. Biol. 18, 295–311 (2006).
14. Fu, X. et al. Estimating accuracy of RNA-Seq and microarrays with proteomics. BMC Genomics 10, 161 (2009).
15. Ardlie, K. G. et al. The Genotype-Tissue Expression (GTEx) pilot analysis: Multitissue gene regulation in humans. Science 348, 648–660 (2015).
16. Supek, F., Bošnjak, M., Škunca, N. & Šmuc, T. REVIGO summarizes and visualizes long lists of gene ontology terms. PLoS One 6, e21800 (2011).
17. Cahoy, J. D. et al. A transcriptome database for astrocytes, neurons, and oligodendrocytes: a new resource for understanding brain development and function. J. Neurosci. 28, 264–278 (2008).
18. Darmanis, S. et al. A survey of human brain transcriptome diversity at the single cell level. Proc. Natl. Acad. Sci. 112, 201507125 (2015).
19. Paşca, S. P. et al. Using iPSC-derived neurons to uncover cellular phenotypes associated with Timothy syndrome. Nat. Med. 17, 1657–1662 (2011).
20. Yuan, Y., Chen, Y.-P. P., Boyd-Kirkup, J., Khaitovich, P. & Somel, M. Accelerated aging-related transcriptome changes in the female prefrontal cortex. Aging Cell 11, 894–901 (2012).
21. Kirkwood, T. B. L. Understanding the odd science of aging. Cell 120, 437–447 (2005).
22. Somel, M., Khaitovich, P., Bahn, S., Pääbo, S. & Lachmann, M. Gene expression becomes heterogeneous with age. Curr. Biol. 16, R359–60 (2006).
23. Bahar, R. et al. Increased cell-to-cell variation in gene expression in ageing mouse heart. Nature 441, 1011–1014 (2006).
24. Morrison, J. H. & Baxter, M. G. The ageing cortical synapse: hallmarks and implications for cognitive decline. Nat. Rev. Neurosci. 13, 240–250 (2012).
25. Liu, X. et al. Extension of cortical synaptic development distinguishes humans from chimpanzees and macaques. Genome Res. 22, 611–622 (2012).
26. Yu, Q. & He, Z. Comprehensive investigation of temporal and autism-associated cell type composition-dependent and independent gene expression changes in human brains. Sci. Rep. 7(1) (2017).
27. Nakamura, H., Kobayashi, S., Ohashi, Y. & Ando, S. Age-changes of brain synapses and synaptic plasticity in response to an enriched environment. J. Neurosci. Res. 56, 307–315 (1999).
28. Peters, A., Sethares, C. & Moss, M. B. The effects of aging on layer 1 in area 46 of prefrontal cortex in the rhesus monkey. Cereb. Cortex 8, 671–684 (1998).
29. Hof, P. R., Nimchinsky, E. A., Young, W. G. & Morrison, J. H. Numbers of Meynert and layer IVB cells in area V1: A stereologic analysis in young and aged macaque monkeys. J. Comp. Neurol. 420, 113–126 (2000).
30. Morterá, P. & Herculano-Houzel, S. Age-related neuronal loss in the rat brain starts at the end of adolescence. Front. Neuroanat. 6, 1–9 (2012).
31. Soreq, L. et al. Major Shifts in Glial Regional Identity Are a Transcriptional Hallmark of Human Brain Aging. Cell Rep. 18, 557–570 (2017).
32. Ledesma, M. D., Martin, M. G. & Dotti, C. G. Lipid changes in the aged brain: Effect on synaptic function and neuronal survival. Prog. Lipid Res. 51, 23–35 (2012).
33. Arendt, T., Brückner, M. K., Mosch, B. & Lösche, A. Selective cell death of hyperploid neurons in Alzheimer's disease. Am. J. Pathol. 177, 15–20 (2010).
34. Mazin, P. et al. Widespread splicing changes in human brain development and aging. Mol. Syst. Biol. 9, 633 (2013).
35. Somel, M. et al. MicroRNA-driven developmental remodeling in the brain distinguishes humans from other primates. PLoS Biol. 9, e1001214 (2011).
36. Barrett, T. et al. NCBI GEO: Archive for functional genomics data sets - Update. Nucleic Acids Res. 41, D991–D995 (2013).
37. Edgar, R. Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 30, 207–210 (2002).
38. Carvalho, B. S. & Irizarry, R. A. A framework for oligonucleotide microarray preprocessing. Bioinformatics 26, 2363–2367 (2010).
39. Bolstad, B. M. preprocessCore: A collection of pre-processing functions. (2016).
40. Yates, A. et al. Ensembl 2016. Nucleic Acids Res. 44, D710–716 (2015).
41. Durinck, S. et al. BioMart and Bioconductor: a powerful link between biological databases and microarray data analysis. Bioinformatics 21, 3439–3440 (2005).
42. Durinck, S., Spellman, P. T., Birney, E. & Huber, W. Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat. Protoc. 4, 1184–1191 (2009).
43. Blalock, E. M. et al. Incipient Alzheimer's disease: microarray correlation analyses reveal major transcriptional and tumor suppressor responses. Proc. Natl. Acad. Sci. USA 101, 2173–2178 (2004).
44. Hokama, M. et al. Altered expression of diabetes-related genes in Alzheimer's disease brains: the Hisayama study. Cereb. Cortex 24, 2476–2488 (2014).
45. Tan, M. G. et al. Genome wide profiling of altered gene expression in the neocortex of Alzheimer's disease. J. Neurosci. Res. 88, 1157–1169 (2010).
46. Antonell, A. et al. A preliminary study of the whole-genome expression profile of sporadic and monogenic early-onset Alzheimer's disease. Neurobiol. Aging 34, 1772–1778 (2013).
47. Miller, J. A., Woltjer, R. L., Goodenbour, J. M., Horvath, S. & Geschwind, D. H. Genes and pathways underlying regional and cell type changes in Alzheimer's disease. Genome Med. 5, 48 (2013).
48. Narayanan, M. et al. Common dysregulation network in the human prefrontal cortex underlies two neurodegenerative diseases. Mol. Syst. Biol. 10, 743 (2014).
49. Zhang, B. et al. Integrated systems approach identifies genetic nodes and networks in late-onset Alzheimer's disease. Cell 153, 707–720 (2013).
50. Irizarry, R. A. Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Res. 31, e15 (2003).
51. Dobin, A. et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29, 15–21 (2013).
52. Harrow, J. et al. GENCODE: The reference human genome annotation for the ENCODE project. Genome Res. 22, 1760–1774 (2012).
53. Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 15, 550 (2014).
54. Benjamini, Y. & Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 29, 1165–1188 (2001).
55. Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat. Genet. 25, 25–29 (2000).
56. Carlson, M. GO.db: A set of annotation maps describing the entire Gene Ontology.
57. Pages, H., Carlson, M., Falcon, S. & Li, N. AnnotationDbi: Annotation Database Interface.
58. Carlson, M. org.Hs.eg.db: Genome wide annotation for Human.
59. Tennekes, M. treemap: Treemap Visualization. (2015).
60. Vlachos, I. S. et al. DIANA-TarBase v7.0: indexing more than half a million experimentally supported miRNA:mRNA interactions. Nucleic Acids Res. 43, D153–9 (2015).
61. Kel, A. E. MATCH: a tool for searching transcription factor binding sites in DNA sequences. Nucleic Acids Res. 31, 3576–3579 (2003).
62. Siepel, A. et al. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Res. 15, 1034–1050 (2005).
63. Newman, A. M. et al. Robust enumeration of cell subsets from tissue expression profiles. Nat. Methods 12, 1–10 (2015).
64. Somel, M. et al. Transcriptional neoteny in the human brain. Proc. Natl. Acad. Sci. USA 106, 5743–5748 (2009).
Acknowledgements
We thank Ö. Gökçümen and W. Yuning for helpful suggestions on the manuscript, A. Aravena, M. Muyan, X. Liu, H. Hu, E. Yurtman and all members of the METU Comparative and Evolutionary Biology Group for discussions and support. H.M.D. was partially supported by TUBITAK 2210E scholarship; M.S. was partially supported by a Turkish Academy of Sciences GEBIP Award. The project was supported by The Scientific and Technological Research Council of Turkey TÜBİTAK (project no. 215Z053).
Author information
Author notes
• Handan Melike Dönertaş
Present address: European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Trust Genome Campus, Hinxton, Cambridge, CB10 1SD, United Kingdom
Affiliations
1. Department of Biological Sciences, Middle East Technical University, 06800, Ankara, Turkey
• Handan Melike Dönertaş, Hamit İzgi & Mehmet Somel
2. Department of Molecular Biology and Genetics, Bilkent University, Ankara, Turkey
• Altuğ Kamacıoğlu
3. CAS Key Laboratory of Computational Biology, CAS-MPG Partner Institute for Computational Biology, 320 Yue Yang Road, Shanghai, 200031, China
• Zhisong He & Philipp Khaitovich
4. Max Planck Institute for Evolutionary Anthropology, Deutscher Platz 6, Leipzig, 04103, Germany
• Philipp Khaitovich
Contributions
M.S. and P.K. conceived the study. M.S. supervised the study. H.M.D. analyzed the data with contributions by H.İ., Z.H. and A.K. M.S. and H.M.D. wrote the manuscript. All authors reviewed the manuscript.
Competing Interests
The authors declare that they have no competing interests.
Corresponding authors
Correspondence to Handan Melike Dönertaş or Mehmet Somel. |
# Conduct a Wald test and construct a 95% Wald confidence interval. Are these sensible?
Question: In a crossover trial comparing a new drug to a standard, $\pi$ denotes the probability that the new one is judged better. It is desired to estimate $\pi$ and test $H_0:\pi=0.50$ against $H_a: \pi \neq 0.50$. In $20$ independent observations, the new drug is better each time.
Give the ML estimate of $\pi$. Conduct a Wald test and construct a 95% Wald confidence interval of $\pi$. Are these sensible?
I have this so far:
ML estimate: $\hat{\pi} = \frac{y}{n} \rightarrow \frac{20}{20}=1.$
The Wald test is $Z_w = \frac{\hat{\pi}-\pi_o}{\sqrt{\frac{(1-\hat{\pi})\hat{\pi}}{n}}}$ Inputting our values we just get
$\rightarrow Z_w = \frac{1-0.50}{\sqrt{\frac{(1-1)0.50}{20}}}=0$ For some reason the solution tells me that it goes to $\infty$, if someone could explain that.
The 95% Wald confidence interval of $\pi$ is$\rightarrow \hat{\pi} \pm Z_\frac{\alpha}{2} \sqrt{\frac{\hat{\pi}(1-\hat{\pi})}{n}}$.
Which is just $1 \pm 1.96(0) \rightarrow (1, 1)$, i.e. a single point. Is this sensible?
It is good that you ask, "Is this sensible?" I think the purpose of this problem is to illustrate a difficulty with the traditional or Wald CI. It gives absurd one-point "intervals" as results when $\hat \pi$ is either 0 or 1. (BTW. your test statistic is infinite because the denominator has the factor $(1-\hat \pi) = (1 - 1) = 0.$)
Presumably you have studied or are about to study other kinds of CIs for the binomial proportion. If not, you can look up the 'Wilson' interval on the Internet; the Wikipedia page is pretty good. The 95% Wilson CI results from 'inverting the test', solving the inequality
$$-1.96 \le \frac{\hat \pi - \pi}{\sqrt{\pi(1-\pi)/n}} \le 1.96$$
to get an interval for $\pi.$ (This involves solving a quadratic equation, and a page of tedious algebra.) For your situation with $x = n = 20,$ the Wilson interval is approximately $(.8389,1.000).$
The Wilson interval is a little messy to compute, so Agresti and Coull have proposed an interval that is very nearly the same for 95% intervals. The idea is to append four imaginary observations to your data, two Successes and two Failures. Thus, you have $\tilde n = n+ 4$ and $\tilde \pi = (x + 2)/\tilde n.$ Then the 95% CI is of the form $$\tilde \pi \pm 1.96\sqrt{\frac{\tilde \pi(1 - \tilde \pi)}{\tilde n}}.$$ [The solution for the Wilson interval has some $2$s and some "small" terms with powers of $n$ in denominators. The Agresti interval conflates $1.96 \approx 2$ and ignores some "small" terms.] In the Agresti interval, $\tilde \pi > 0$ and $\tilde \pi < 1,$ so that nonsensical "one-point" CIs cannot occur. Perhaps more important, Brown, Cai, and DasGupta (2001) have shown that this Agresti or 'Plus-4' interval has actual coverage probabilities much nearer to the nominal 95% than the Wald intervals. The paper is readable, or you could look at this page.
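To make the plus-four recipe concrete for the data in this question (the arithmetic below is my own illustration): with $x = n = 20$ we have $\tilde n = 24$ and $\tilde \pi = 22/24 \approx 0.917,$ so
$$\tilde \pi \pm 1.96\sqrt{\frac{\tilde \pi(1 - \tilde \pi)}{\tilde n}} \approx 0.917 \pm 1.96\sqrt{\frac{(0.917)(0.083)}{24}} \approx 0.917 \pm 0.111,$$
i.e. about $(0.81, 1.03),$ truncated at $1$ to give roughly $(0.81, 1].$ Unlike the Wald interval, this is not a one-point interval, and it is close to the Wilson interval quoted above.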
Based on asymptotic results, Wald intervals are fine for very large $n.$ However, they involve two approximations that do not work well for small and moderate $n$: (1) the normal approximation to the binomial and (2) the use of the approximate standard error $\sqrt{\hat \pi(1-\hat \pi)/n}$ instead of the exact standard error $\sqrt{\pi(1- \pi)/n}.$
Finally, a Bayesian probability interval (based on a non-informative prior distribution) is sometimes used as a CI when a computer package such as R is available to compute the endpoints. When there are $x$ successes in $n$ trials, the 95% interval uses quantiles .025 and .975 of the distribution $\mathsf{Beta}(x+1, n - x + 1).$ So for your example with $x = n = 20,$ the interval would be $(0.8389, 0.9988).$
x = n = 20; qbeta(c(.025,.975), x + 1, n - x + 1)
## 0.8389024 0.9987951 |
My Math Forum Cylinder Volume Problem
Geometry Math Forum
November 10th, 2016, 03:29 AM #1
Newbie
Joined: Nov 2016
From: Victoria
Posts: 3
Thanks: 0
Cylinder Volume Problem
Can someone please help me solve this problem? See attached. Thanks!
A rain gauge is cylindrical in shape and has a diameter at the opening of 100 mm. If 5 mm of rain falls, calculate the volume of the water collected in the rain gauge.
Attached Images
IMG_1931.jpg (95.8 KB, 8 views)
November 10th, 2016, 01:08 PM #2 Newbie Joined: Nov 2016 From: Victoria Posts: 3 Thanks: 0 Anyone please?
November 10th, 2016, 03:58 PM #3
Global Moderator
Joined: May 2007
Posts: 6,205
Thanks: 487
Quote:
Originally Posted by lucas111 Can someone please help me solve this problem? See attached. Thanks! A rain gauge is cylindrical in shape and has a diameter at the opening of 100 mm. If 5 mm of rain falls, calculate the volume of the water collected in the rain gauge.
Opening has area of $\displaystyle A=\pi 50^2$.
volume of water = 5A.
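Carrying the arithmetic through (added for completeness): $\displaystyle V = 5A = 5\pi (50)^2 \approx 39\,270\ \text{mm}^3 \approx 39.3\ \text{cm}^3$, i.e. about 39 mL of water.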
November 10th, 2016, 04:27 PM #4
Newbie
Joined: Nov 2016
From: Victoria
Posts: 3
Thanks: 0
Thanks! I think you got it right.
I just got the answer from someone else which was done differently and it matches your answer. Cool! Please see attached.
Attached Images
IMG_20161111_020110.jpg (81.9 KB, 0 views)
# All-Purpose Sample Entropy
When given samples of a discrete random variable, the entropy of the distribution may be estimated by $- \sum \hat{P_i} \log{\hat{P_i}}$, where $\hat{P_i}$ is the sample estimate of the frequency of the $i$th value. (this is up to a constant determined by the base of the log.) This estimate should not be applied to observations from a continuous distribution, at least naively, because it would yield a value which depends only on the sample size.
Beirlant et al describe a number of approaches for the continuous problem, including estimates based on empirical CDF, nearest neighbor distances and the $m$-spacing estimate, which is given by $$\frac{1}{n}\sum_{i=1}^{n-m}\log{(\frac{n}{m}(X_{(i+m)} - X_{(i)}))}$$, where $X_{(i)}$ is the $i$th order statistic of the sample, and $m$ varies in a certain way with $n$. It is not clear how this estimate is to be computed in the presence of ties, i.e. it does not appear to be applicable to discrete distributions. (a naive correction for ties (drop terms which have $\log{0}$) appears to give an estimator which does not depend on the relative frequency of the classes, only their values, which seems wrong.)
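For concreteness, here is a minimal sketch of the two estimators being contrasted (my own illustration, with $m \approx \sqrt{n}$ standing in for the requirement that $m$ vary with $n$):

```python
import numpy as np

def plugin_entropy(samples):
    # Discrete plug-in estimate: -sum p_i log p_i over observed relative frequencies.
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def m_spacing_entropy(samples, m=None):
    # m-spacing estimate for continuous data, following the formula quoted above.
    # Ties give zero spacings and hence log(0), which is exactly the difficulty
    # the question raises for discrete data.
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))  # placeholder for "m varies with n"
    spacings = x[m:] - x[:-m]               # X_(i+m) - X_(i) for i = 1..n-m
    return np.sum(np.log(n / m * spacings)) / n

# Quick check on synthetic data.
rng = np.random.default_rng(0)
print(plugin_entropy(rng.integers(0, 4, size=1000)))   # ~log(4) for a uniform 4-class variable
print(m_spacing_entropy(rng.normal(size=1000)))        # ~0.5*log(2*pi*e) ~ 1.42 for a standard normal
```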
The question: is there an 'all-purpose' estimator which can deal with both discrete and continuous (or even mixed) distributions?
• I'm not immediately sure how to solve this, but it seems reasonable that an estimate of the cdf can still be obtained. It might not be continuous, but I don't see that as a huge problem. Sep 15, 2010 at 0:11
• Like Robin says, definition of entropy depends on choice of measure. For discrete distributions it's clear-cut, but for continuous there's no agreement on the right measure to use, so you have a spectrum of different entropies. Perhaps you should ask why you need entropy in a first place, ie, what exactly do you expect to learn from it? Sep 15, 2010 at 17:06
• There are some significant differences between entropy of a discrete measure and differential entropy, as calculated for a measure with a density function. Estimating entropy is actually a difficult problem with lots of pretty strong negative results. See the work of Liam Paninski, for example. Researchers in neuroscience seem to take a strong interest in problems of this type. Aug 3, 2011 at 12:16
• One particular paper you might interested in is: L. Paninski, Estimation of entropy and mutual information, Neural Computation, vol. 15, 1191-1254. Aug 3, 2011 at 12:21
• @yaroslav - even for discrete distributions, definition of which counting measure to use is not clear cut. For example take an urn with $N$ balls, $W$ white and $B=N-W$ black. We can count "outcomes", which gives $Pr(W=w|N)=\frac{1}{N+1}$, or alternatively we can count individual balls, which gives $Pr(W=w|N)=2^{-N}{N \choose w}$ Oct 9, 2011 at 4:24
Entropy is entropy with respect to a measure
As noticed in the answer to this question https://mathoverflow.net/questions/33088/entropy-of-a-general-prob-measure/33090#33090 , entropy is only defined with respect to a given measure. For example discrete entropy is entropy with respect to counting measure.
Sample entropy should be an estimate of a predefined entropy.
I think the idea of sample entropy can be generalised to any type of entropy but you need to know which entropy you are trying to estimate before estimating it.
Example of entropy with respect to counting+lebesgues
For example, if you are trying to estimate the entropy with respect to the sum of the lebesgues measure on $[0,1]$ and the counting measure on $\{0,1\}$ a good estimate certainly (my intuition) is a sum of the two estimates you mention in your question (with $i=0,1$ in the first sum).
• very good. I'll sum the two and see how that works! Sep 16, 2010 at 4:45 |
# Free Massachusetts CDL Air Brakes Practice Test 2023
Do you need an Air Brakes endorsement or an L endorsement for your commercial driving license? The Massachusetts CDL Air Brake test has some differences from other endorsements because your license will receive a mark of restriction if you fail the test. So having good preparation before exam day is very necessary. To ensure that our questions are relevant, all of our CDL practice test packs are based on the MA CDL Manual. Each question has a detailed explanation for you to thoroughly learn the format and the topic. Don't be afraid of having a restriction on your license. Let’s try our Massachusetts CDL Practice Test to get ready to pass the Massachusetts CDL Air Brake Test now.
Our CDL practice tests:
Based on 2021 MA commercial driver's license manual
Perfect for first-time, renewal applicants
MA CDL Air Brakes Test format:
25 questions
80% passing score
List of questions
1.
To test air service brakes you should:
2.
The parking or emergency brake on a heavy vehicle can only be held in position by something that cannot leak away, like:
3.
What can legally hold a parking or emergency brake in position for a truck, truck tractor, or bus?
4.
A low air pressure warning signal is ________.
5.
The air brake lag distance at 55 mph on dry pavement adds about ____ feet.
6.
How far should manual slack adjusters move before they need to be adjusted?
7.
The most important thing to do when a low air pressure warning comes on is:
8.
To stop the vehicle, air brakes systems use:
9.
What negatively affects the braking power of a spring brake?
10.
When the safety valve releases air this indicates ___________.
11.
What happens when your brake drums get very hot?
12.
If the spring brakes are on, when should you push the brake pedal?
13.
The level at which the air compressor stops pumping air is:
14.
15.
To stop the vehicle, the brake shoes and linings are pushed against:
16.
You should avoid using the parking brake when _______.
17.
If you need to make an emergency stop, you should brake so that you can ________.
18.
In dual air systems, how long should it take air pressure to build 85 to 100 psi?
19.
Brake drums should not have cracks longer than what length?
20.
In a dual air brake system, if the air pressure in one part of the system drops enough to set off the low air warning:
21.
At minimum, a dual air system should build up to what psi in the primary and secondary systems?
22.
Vehicles without automatic air tank drains need to be checked manually to remove ________?
23.
The spring brakes of a tractor will come on when the air pressure falls within _______.
24.
A warning to drivers behind you that the air brakes have been applied is the:
25.
What does ABS stand for? |
# For `f(x)=1/(4x^4)-1/(3x^3)+2x`, use analytic methods (exact intervals): function increasing, decreasing, local extreme values. How do you factor y' for this question?
You should find the first derivative of the function, applying the quotient rule to each term, such that:
`f'(x) = (1/4x^4 - 1/3x^3 + 2x)'`
`f'(x) = -16x^3/(16x^8) + 9x^2/(9x^6) + 2`
Reducing duplicate factors yields:
`f'(x) = -1/(x^5) + 1/(x^4) + 2 => f'(x) = (-1 + x + 2x^5)/x^5`
You need to find the roots of derivative such that:
`f'(x) = 0 => (-1 + x + 2x^5)/x^5 = 0 => -1 + x + 2x^5 = 0`
You need to move the constant term to the right side such that:
`x + 2x^5 = 1`
You need to solve the equation using a graphical (or numerical) method; notice that the graphs of `y = x+2x^5` and `y = 1` intersect at a point `x in (0,1).`
Hence, the function has an extreme point at `x_0 in (0,1).`
You should notice that `f'(x)` is undefined at `x = 0`, that `f'(x)` is negative for `x` slightly greater than 0, and that the derivative is positive at `x = 1`. Hence, on the positive axis the function decreases over `(0,x_0)` and increases over `(x_0,oo)`, so it reaches a local minimum at `x = x_0 in (0,1)`. (For `x<0`, both `-1/x^5` and `1/x^4` are positive, so `f'(x)>0` and the function increases over `(-oo,0)`.)
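As a quick numerical check of where that critical point lies (added for illustration, not part of the original answer), the equation `2x^5 + x - 1 = 0` can be solved with any root finder, e.g. bisection:

```python
# Locating x_0 in (0,1) where f'(x) = 0, i.e. 2x^5 + x - 1 = 0, by bisection.
def g(x):
    return 2 * x**5 + x - 1

lo, hi = 0.0, 1.0        # g(0) = -1 < 0 and g(1) = 2 > 0, so a root lies in between
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print((lo + hi) / 2)     # approximately 0.689
```

which gives `x_0 ~ 0.689`, consistent with `x_0 in (0,1)`.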
Approved by eNotes Editorial Team |
# Calling a function of a module by using its name (a string)
user4938 Published in September 21, 2018, 8:05 am
What is the best way to go about calling a function given a string with the function's name in a Python program. For example, let's say that I have a module foo, and I have a string whose contents are "bar". What is the best way to go about calling foo.bar()?
I need to get the return value of the function, which is why I don't just use eval. I figured out how to do it by using eval to define a temp function that returns the result of that function call, but I'm hoping that there is a more elegant way to do this. |
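For reference, the standard approach here is getattr, which returns the function object so that its return value can be captured when it is called. A minimal sketch (using the standard-library math module as a stand-in, since foo itself is hypothetical):

```python
import math  # stands in for the question's module "foo"

name = "sqrt"                      # the function name held in a string (the question's "bar")
result = getattr(math, name)(2.0)  # look the function up on the module and call it
print(result)                      # 1.414..., and the return value is captured directly

# Defensive variant when the name might be missing or not callable:
func = getattr(math, name, None)
if callable(func):
    result = func(2.0)
```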
Question
Calculate the ratio of the heights to which water and mercury are raised by capillary action in the same glass tube.
$-2.78$
This ratio is negative since the height of mercury is negative. The height of the mercury within the glass tube is lower than the surrounding mercury outside of the tube.
Solution Video
OpenStax College Physics Solution, Chapter 11, Problem 66 (Problems & Exercises) (3:27)
Video Transcript
This is College Physics Answers with Shaun Dychko. A glass tube is inserted into some water and then it's inserted separately into some mercury and the question here is what is the ratio of heights due to capillary action that the two fluids will reach? So we are taking a ratio of the water height in the glass tube versus the mercury height in the glass tube. So the height is 2 times the surface tension of the fluid, which is water in the first case here, times cosine of the contact angle between the fluid and the material of the tube so water and glass divided by the density of water times g times the radius of the tube. Now I didn't put a subscript w on the radius because that doesn't depend on the fluid— that's the same in both cases— it's the same tube and the same radius so there's no need to distinguish it with a subscript. And so the height for mercury is 2 times the surface tension of mercury times cos of the contact angle between mercury and glass divided by density of mercury times g times r. So we divide these two heights and it's confusing to divide a fraction by a fraction so instead I am going to multiply it by the reciprocal of the denominator so this height for mercury is going to be written here flipped over and multiplied. And so the r's cancel, the g's cancel and the 2's cancel and we are left with the ratio of heights is surface tension of water times cos of the contact angle times density of mercury and then divided by surface tension of mercury times cos of its contact angle with glass and then density of water. So we look up all these things in our data tables. So we have surface tension of water and we find that in table [11.3]— that's 0.0728 newtons per meter— multiplied by cos of the contact angle between water and glass— water-glass contact angle is 0 degrees— multiplied by 13.6 times 10 to the 3 kilograms per cubic meter density of mercury which we found in table [11.1]— mercury density is 13.6 times 10 to the 3 kilograms per cubic meter— and divide that by the surface tension of mercury which is very high, 0.465, compared to 0.0728 for water— and that is shown here... 0.465 newtons per meter— times cos of 140 degrees is the contact angle between mercury and glass and then multiply by the density of water— 1.000 times 10 to the 3 kilograms per cubic meter— this works out to negative 2.78. And the reason this is negative is because this height is negative; the mercury goes down compared to the fluid level outside the tube so if this is the tube and this is the fluid here, mercury would be lower than the surrounding fluid level. So this little height here is h Hg and it's negative because it's down. Okay and there we go! |
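For reference, the calculation spoken in the transcript can be written compactly, using the same numbers quoted above:
$$\frac{h_w}{h_{Hg}} = \frac{\gamma_w \cos\theta_w \, \rho_{Hg}}{\gamma_{Hg} \cos\theta_{Hg} \, \rho_w} = \frac{(0.0728)(\cos 0^\circ)(13.6\times 10^{3})}{(0.465)(\cos 140^\circ)(1.000\times 10^{3})} \approx -2.78$$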
# #OctothorpeAsciiArt
An Octothorpe (also called number sign, hash, hashtag, or pound sign) is the following ASCII character:
#
Isn't that a fun shape? Let's make bigger versions of it! So here is your challenge:
Given a positive integer N, output an ASCII hashtag of size N.
For example, an ASCII hashtag of size 1 looks like this:
# #
#####
# #
#####
# #
Trailing whitespace on each line is allowed, but not required.
The input will always be a valid positive integer, so you don't have to handle non-numbers, negative, or 0. Your output can be in any reasonable format, so outputting to STDOUT, returning a list of strings, or string with newlines, a 2D matrix of characters, writing to a file, etc. are all fine.
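For reference, here is an ungolfed sketch of the pattern in Python (not a competing answer, just to make the construction explicit; see the test cases below):

```python
def octothorpe(n):
    # A size-n octothorpe is a 5x5 grid of n-by-n blocks; a block is filled with '#'
    # whenever its (0-based) block row or block column index is odd.
    return ["".join("#" if (r // n) % 2 or (c // n) % 2 else " "
                    for c in range(5 * n))
            for r in range(5 * n)]

print("\n".join(octothorpe(2)))
```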
## Test cases
2:
## ##
## ##
##########
##########
## ##
## ##
##########
##########
## ##
## ##
3:
### ###
### ###
### ###
###############
###############
###############
### ###
### ###
### ###
###############
###############
###############
### ###
### ###
### ###
4:
#### ####
#### ####
#### ####
#### ####
####################
####################
####################
####################
#### ####
#### ####
#### ####
#### ####
####################
####################
####################
####################
#### ####
#### ####
#### ####
#### ####
5:
##### #####
##### #####
##### #####
##### #####
##### #####
#########################
#########################
#########################
#########################
#########################
##### #####
##### #####
##### #####
##### #####
##### #####
#########################
#########################
#########################
#########################
#########################
##### #####
##### #####
##### #####
##### #####
##### #####
Since this is a code-golf, try to write the shortest possible solution you can, and above all else, have fun!
• Related – pppery Aug 17 '17 at 21:38
# MATL, 201612 11 bytes
3 bytes thanks to DJMcMayhem.
1 byte thanks to Luis Mendo.
21BwY"&*~Zc
Try it online!
## Explanation
% stack starts with input e.g. 2
21 % push 21 to stack 2 21
B % convert to binary 2 [1 0 1 0 1]
w % swap [1 0 1 0 1] 2
Y" % repeat [1 1 0 0 1 1 0 0 1 1]
&* % one-input multiplication [[1 1 0 0 1 1 0 0 1 1]
[1 1 0 0 1 1 0 0 1 1]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[1 1 0 0 1 1 0 0 1 1]
[1 1 0 0 1 1 0 0 1 1]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[1 1 0 0 1 1 0 0 1 1]
[1 1 0 0 1 1 0 0 1 1]]
~ % complement [[0 0 1 1 0 0 1 1 0 0]
[0 0 1 1 0 0 1 1 0 0]
[1 1 1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1 1 1]
[0 0 1 1 0 0 1 1 0 0]
[0 0 1 1 0 0 1 1 0 0]
[1 1 1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1 1 1]
[0 0 1 1 0 0 1 1 0 0]
[0 0 1 1 0 0 1 1 0 0]]
Zc % convert 0 to spaces ## ##
1 to octothorpes ## ##
and join by newline ##########
##########
## ##
## ##
##########
##########
## ##
## ##
• You can use Zc instead of 35*c and ~ (logical NOT) instead of 0= – James Aug 17 '17 at 20:12
• @DJMcMayhem @_@ why is that a built-in – Leaky Nun Aug 17 '17 at 20:13
• Actually, the reason that's a builtin is really interesting. I could be wrong, but I think conor suggested it, and Suever wrote a script that looks at all the MATL answers to see what functions are more common for future improvements. Zc was just added – James Aug 17 '17 at 20:15
• Also, since each cell just has to be non-zero, you could do Q instead of 2< – James Aug 17 '17 at 20:18
• @LeakyNun You can change !t* to &*. The latter means "one-input multiplication", which multiplies (element-wise) the input by its transpose – Luis Mendo Aug 18 '17 at 1:50
# Brain-Flak, 420 bytes
(()()()){({}<(({})){({}<<>(<>({})<>){({}<((((()()()()){}){}()){}())>[(
)])}{}(<>({})<>){({}<((((()()()()){}){}){})>[()])}{}(<>({})<>){({}<(((
(()()()()){}){}()){}())>[()])}{}(<>({})<>){({}<((((()()()()){}){}){})>
[()])}{}((()()()()()){})<>>[()])}{}((({}))<(({})(({}){}){})>){({}<<>(<
>({})<>){({}<((((()()()()){}){}()){}())>[()])}{}((()()()()()){})<>>[()
])}{}{}>[()])}{}({}<>)(({})((({({})({}[()])}{})){}){}{}){({}<{}>[()])}
Try it online!
No, the score of 420 was not intentional. I promise. Readable version:
# 3 Times...
(()()())
{
({}<
#Duplicate the input
(({}))
#Input times...
{
({}<
#Switch to the main stack
<>
#Grab the duplicate of the input
(<>({})<>)
#That many times...
{({}<
# Push a hash
((((()()()()){}){}()){}())
>[()])}{}
#Grab the duplicate of the input
(<>({})<>)
#That many times...
{({}<
#Push a space
((((()()()()){}){}){})
>[()])}{}
#Grab the duplicate of the input
(<>({})<>)
#That many times...
{({}<
# Push a hash
((((()()()()){}){}()){}())
>[()])}{}
#Grab the duplicate of the input
(<>({})<>)
#That many times...
{({}<
#Push a space
((((()()()()){}){}){})
>[()])}{}
#Push a newline
((()()()()()){})
#Toggle back to the alternate stack
<>
#Decrement the (second) loop counter
>[()])
#Endwhile
}
#Pop the now zeroed loop counter
{}
#Turn [a] into [a, a*5, a]
((({}))<(({})(({}){}){})>)
#A times....
{
({}<
#Toggle back over
<>
#Grab a*5
(<>({})<>)
#That many times...
{({}<
#Push a space
((((()()()()){}){}()){}())
>[()])}{}
#Push a newline
((()()()()()){})
#Toggle back
<>
#Decrement the (second) loop counter
>[()])
}
#Pop the loop counter and the a*5
{}{}
#Decrement the outer loop counter
>[()])
}
#Pop the zeroed loop counter
{}
#Pop a over
({}<>)
#Pushes (a**2) * 5 + a
(({})((({({})({}[()])}{})){}){}{})
#That many times...
{({}<
#Pop a character off the output stack
{}
>[()])}
# 6502 machine code (C64), 59 56 bytes
00 C0 20 9B B7 A9 06 85 FC 86 FE A6 FE 86 FD A9 03 4D 1F C0 8D 1F C0 C6 FC D0
01 60 A9 23 A0 05 49 00 20 D2 FF CA D0 FA A6 FE 88 D0 F3 A9 0D 20 D2 FF C6 FD
D0 E6 F0 D3
### Online demo
Usage: SYS49152,N where N is a number between 1 and 255.
(values greater than 4 will already be too large for the C64 screen, starting from 8, the output is even too wide)
Explanation:
00 C0 .WORD $C000 ; load address .C:c000 20 9B B7 JSR$B79B ; read N into X
.C:c003 A9 06 LDA #$06 ; number of "logical" lines plus 1 for hash .C:c005 85 FC STA$FC ; store in counter variable for lines
.C:c007 86 FE STX $FE ; store N in counter variable for char repetitions .C:c009 A6 FE LDX$FE ; load repetition counter
.C:c00b 86 FD STX $FD ; store in counter variable for line repetitions .C:c00d A9 03 LDA #$03 ; value to toggle the character toggle
.C:c00f 4D 1F C0 EOR $C01F ; xor character bit toggle .C:c012 8D 1F C0 STA$C01F ; store character bit toggle
.C:c015 C6 FC DEC $FC ; decrement "logical" lines .C:c017 D0 01 BNE$C01A ; not 0 -> continue
.C:c019 60 RTS ; program done
.C:c01a A9 23 LDA #$23 ; load hash character .C:c01c A0 05 LDY #$05 ; load "logical" columns for hash
.C:c01e 49 00 EOR #$00 ; in each odd "logical" line, toggle character .C:c020 20 D2 FF JSR$FFD2 ; output one character
.C:c023 CA DEX ; decrement character repetition
.C:c024 D0 FA BNE $C020 ; not 0 -> back to output .C:c026 A6 FE LDX$FE ; reload character repetition
.C:c028 88 DEY ; decrement "logical" columns
.C:c029 D0 F3 BNE $C01E ; not 0 -> back to character toggle .C:c02b A9 0D LDA #$0D ; line done, load newline character
.C:c02d 20 D2 FF JSR $FFD2 ; and output .C:c030 C6 FD DEC$FD ; decrement line repetitions
.C:c032 D0 E6 BNE $C01A ; not 0 -> back to character init .C:c034 F0 D3 BEQ$C009 ; else back to main loop (toggle char toggling)
• +1 for nostalgia (6502 assembly on a c64 was my first programming experience...) – Olivier Dulac Aug 18 '17 at 14:01
l%n=l!!1:l++l<*[1..n]
f n=["##"%n,"# "%n]%n
Try it online!
Outputs a list of strings.
# Python 2, 55 bytes
def f(n):p=[(" "*n+"#"*n)*2]*n;print(p+["#"*n*5]*n)*2+p
Try it online!
This returns a 2D list of characters.
# Python 2, 65 bytes
def f(n):p=((" "*n+"#"*n)*2+"\n")*n;print(p+("#"*n*5+"\n")*n)*2+p
Try it online!
# Python 2, 66 bytes
def f(n):p=[(" "*n+"#"*n)*2]*n;print'\n'.join((p+["#"*n*5]*n)*2+p)
Try it online!
• Wat witchkraft is yer footer – Leaky Nun Aug 17 '17 at 20:25
• @LeakyNun A for loop :) – Mr. Xcoder Aug 17 '17 at 20:25
• No, I'm talking about the f(i); storing the result in a temp and print accessing it. – Leaky Nun Aug 17 '17 at 20:26
• @LeakyNun Ya misunderstood: f(i) prints and print in Python 2 adds a newline :P – Mr. Xcoder Aug 17 '17 at 20:27
• Oh, how stupid of me. – Leaky Nun Aug 17 '17 at 20:30
# Charcoal, 21 bytes
NθUOײθ#UOθ F²⟲OO²⁴⁶θ
Try it online! Link is to verbose version of code. I'd originally tried a cute bitmap approach:
F⁵F⁵F&|ικ¹«J×ιIθ×κIθUOθ#
Try it online! Link is to verbose version of code. Explanation: Works by considering the # as an array of 5×5 squares. The squares that are in odd rows or columns need to be filled in.
• does charcoal really not have a hashtag shape built-in? – dzaima Aug 17 '17 at 20:58
• Did I tie charcoal O_O? – Magic Octopus Urn Aug 17 '17 at 21:06
• yay (hmm looks like I need to fix that a bit) – ASCII-only Aug 17 '17 at 21:23
• @ASCII-only What needs fixing? – Neil Aug 17 '17 at 21:32
• Oblong shouldn't be printing the steps for the polygon it uses internally lol – ASCII-only Aug 17 '17 at 21:33
# J, 22 bytes
#('# '{~#:5$21,0)#~"1]
Try it online!
A lot of similarity to the other J answer, though I don't understand trains with lots of nouns well, so my answer has three potential bytes to cut off (two parens and a reflexive-~).
# Explanation
### Generating the octothorpe
The octothorpe is made by everything in the parenthetical, reproduced below for convenience.
'# '{~#:5$21,0
A lot of the way I make the octothorpe is abuse of the way that J pads its arrays when they aren't long enough.
21,0 simply creates the array 21 0.
# CJam, 2726 25 bytes
{_[{S3*'#*'#5*}3*;]fe*e*}
Try it online!
Fun fact: This originally started at 29 bytes, and bytes have been removed one-by-one ever since, alternating between block and full-program mode.
## Explanation:
{ e# Stack: | 2
_ e# Duplicate: | 2 2
[ e# Begin array: | 2 2 [
{ e# Do the following 3 times:
S e# Push a space | 2 2 [" "
3* e# Repeat it 3 times: | 2 2 [" "
'#* e# Join with '#': | 2 2 [" # # "
'# e# Push '#': | 2 2 [" # # " '#
5* e# Repeat it 5 times: | 2 2 [" # # " "#####"
}3* e# End: | 2 2 [" # # " "#####" " # # " "#####" " # # " "#####"
; e# Delete top of stack: | 2 2 [" # # " "#####" " # # " "#####" " # # "
] e# End array: | 2 2 [" # # " "#####" " # # " "#####" " # # "]
fe* e# Repeat characters: | 2 [" ## ## " "##########" " ## ## " "##########" " ## ## "]
e* e# Repeat strings: | [" ## ## " " ## ## " "##########" "##########" " ## ## " " ## ## " "##########" "##########" " ## ## " " ## ## "]
} e# End
e# Result:
e# [" ## ## "
e# " ## ## "
e# "##########"
e# "##########"
e# " ## ## "
e# " ## ## "
e# "##########"
e# "##########"
e# " ## ## "
e# " ## ## "]
• Someone was prepared for this challenge :P – ETHproductions Aug 17 '17 at 20:10
• @ETHproductions It was a CMC and moved to main... – Esolanging Fruit Aug 17 '17 at 20:10
• @ETHproductions Can't really blame him for that... – Leaky Nun Aug 17 '17 at 20:12
## Husk, 12 10 bytes
´Ṫ▲Ṙ" # #
Try it online! Note the trailing space.
## Explanation
´Ṫ▲Ṙ" # # Implicit input, e.g. n=2.
Ṙ" # # Repeat each character of the string n times: " ## ## "
´Ṫ Outer product with itself by
▲ maximum: [" ## ## "," ## ## ","##########","##########"," ## ## "," ## ## ","##########","##########"," ## ## "," ## ## "]
Print implicitly, separated by newlines.
# J, 23 19 bytes
' #'{~1=]+./~@#i:@2
Saved 4 bytes thanks to @LeakyNun.
Try it online!
## Explanation
' #'{~1=]+./~@#i:@2 Input: integer n
2 The constant 2
i:@ Range [-2, -1, 0, 1, 2]
] Get n
# Copy each n times
+./~@ GCD table
1= Equals 1, forms the hashtag for input 1
' #'{~ Index and select the char
• Rats! Was just about to post a (4 byte longer) solution of my own. I'm really impressed by how you're able to compose these functions without caps and with few conjunctions. – cole Aug 17 '17 at 20:57
• @cole Thanks. Sometimes caps can be avoided by using a noun and dyad. For example, [:|:f could be 0|:f – miles Aug 17 '17 at 21:14
• ' # '{~]#"1]#+./~@i:@2 saves a byte – Conor O'Brien Aug 18 '17 at 0:45
• repeat before multiplication gives you 19 bytes: f=:' #'{~1=]+./~@#i:@2 – Leaky Nun Aug 18 '17 at 3:57
• @hoosierEE It's a new feature coming in J 8.06. You can try the beta jsoftware.com/download/j806/install – miles Aug 18 '17 at 22:55
# Jelly, 1413 11 bytes
Saved 2 bytes thanks to @JonathanAllen
5ẋ€Ẏ&þị⁾ #
A monadic link returning a list of lines. Note the trailing space.
Try it online!
### How it works
5ẋ€Ẏ&þị⁾ # Main link. Arguments: n (integer) 1
5 Yield 5.
ẋ€ Create a range and repeat each item n times. [[1], [2], [3], [4], [5]]
Ẏ Tighten; dump all sublists into the main list.
[1, 2, 3, 4, 5]
þ Create a table of [[1, 0, 1, 0, 1],
& bitwise ANDs, [0, 2, 2, 0, 0],
reusing this list. [1, 2, 3, 0, 1],
[0, 0, 0, 4, 4],
[1, 0, 1, 4, 5]]
ị⁾ # Index into the string " #". [" # # ",
0 -> "#", 1 -> " ", 2 -> "#", etc. "#####",
" # # ",
"#####",
" # # "]
• Nice observation regarding bitwise or - save two bytes by switching from or to and - removing the need to lower, allowing an implicit range and removing the need for µ (or the ⁸ you could have had there instead)... 5ẋ€Ẏ&þị⁾ # – Jonathan Allan Aug 17 '17 at 23:04
• @JonathanAllan Interesting--why does 5Ḷẋ€ require the µ, but not 5ẋ€? – ETHproductions Aug 17 '17 at 23:26
• I thought the need was just to stop Ẏ acting on n and then passing it to the right of ẋ€, since with a nilad-dyad leading chain being called monadically it's not necessary. I'm not quite sure, however, how seems to then place 5 (or maybe the list of that length) on the right of the tabled & though. – Jonathan Allan Aug 18 '17 at 1:04
# Game Maker Language, 138 108 bytes
n=argument0 s=''for(j=0;j<5*n;j+=1){for(l=0;l<5*n;l+=1)if(j div n|l div n)&1s+='#'else s+=' 's+='
'}return s
Intended as a script (Game Maker's name for user-defined functions), thus the n=argument0 and return s. 20 bytes could be shaved by taking n directly from the current instance and using s as the result. (The instance gets these variables anyway because they weren't declared with var).
Beware of course that # is used by Game Maker's graphics stuff as an alternative newline character, so you might want to prefix it with \ if you want to output to the screen ;)
Also note that I'm using Game Maker 8.0's version of GML here; modern GML versions might have features that could save additional bytes.
Some ideas courtesy of friends wareya and chordbug.
• I think this is the first GML answer i've ever seen – Timothy Groote Aug 18 '17 at 14:32
• @TimothyGroote It's a shame it isn't used more, its optional brackets and semicolons are great for golfing :) – Andrea Aug 18 '17 at 14:33
# Perl 5, 49 + 1 (-p) = 50 bytes
$_=' # # '=~s/./$&x$_/gre x$_;$_.=(y/ /#/r.$_)x2
Try it online!
How?
Implicitly store the input in $_ via the -p flag. Start with the most basic possible top line " # # " with its trailing newline. Replicate each of those characters by the input number. Then replicate that by the input number to form the top part of the octothorpe, storing all of that back in$. Then append the line with all characters replaced by '#' times the input number. Then append the top section. Do those last two sentences a total of two times. Output of the $is implicit in the -p flag. • I like how your answer is just as readable as mine. – AdmBorkBork Aug 17 '17 at 20:31 • They've always said that Perl is a write-only language. – Xcali Aug 17 '17 at 20:41 # 05AB1E, 2522 21 bytes •LQ•bûε×}5ôεS„# èJ¹F= Try it online! -1 because Emigna hates transliterate and, thankfully, reminds me I should too :P. Gotta be a better way than bitmapping it... Still working. • Reflection... is not the answer in 05AB1E, though it seems like it could be... – Magic Octopus Urn Aug 17 '17 at 20:53 • 5ôεS„# èJ¹F= saves a byte. – Emigna Aug 17 '17 at 20:56 • @Emigna would canvas be good for this? – Magic Octopus Urn Aug 17 '17 at 20:59 • Possibly. I haven't tried the canvas yet so I'm not really sure of its capabilities. Seems like something it's made for. – Emigna Aug 17 '17 at 21:00 ## JavaScript (ES6), 79 bytes f= n=>[...Array(n*5)].map((_,i,a)=>a.map((_,j)=> #[(i/n|j/n)&1]).join).join <input type=number oninput=o.textContent=f(this.value)><pre id=o> Port of the bitmap approach that I'd used for my original Charcoal attempt. # Python 2, 124, 116, 113, 112, 98, 96 66 bytes New (Credit: HyperNeutrino): def f(a):i='print(" "*a+"#"*a)*2;'*a;exec(i+'print"#"*a*5;'*a)*2+i Old: a=input();b,c="# " for i in"012": exec'print c*a+b*a+c*a+b*a;'*a if i<"2":exec'print b*a*5;'*a Try it online! Obviously not the shortest solution, but I think it's decent. Any feedback would be appreciated! • a,b,c=input()," #" should save some bytes. – James Aug 17 '17 at 23:33 • @DJMcMayhem That gave me an error. Did you mean a,b,c=input(),"#"," "? Which isn't any shorter...I appreciate the help! – Braeden Smith Aug 17 '17 at 23:41 • Oh, sorry. I assumed that worked because a,b="# " works. – James Aug 17 '17 at 23:44 • a=input();b,c="# " will work and save bytes – Ad Hoc Garf Hunter Aug 17 '17 at 23:44 • You can also get get rid of the parens in (i==2) and add a space to the beginning. – Ad Hoc Garf Hunter Aug 17 '17 at 23:47 # Brain-Flak, 338 332 bytes 6 bytes thanks to Riley. (({}<>)<(())>)(()()()()()){({}<(<>)<>{({}<<>({}<(((((()()()()())){})){}{}{})<>([({})]()){(<{}({}<((((({}))){}){}{}){({}<<>(({}))<>>[()])}{}>)>)}{}(({})<{{}(<(()()()()()){({}<<>(<([{}](((((()()){}){}){}){}()){}())>)<>{({}<<>({}<(({}))>())<>>[()])}<>({}<>{})>[()])}{}>)}>{})<>((()()()()()){})>())<>>[()])}<>({}<>{}<([{}]())>)>[()])}<> Try it online! ## More "readable" version (({}<>)<(())>)(()()()()()) {({}<(<>)<>{({}<<>({}<(((((()()()()())){})){}{}{})<> ([({})]()){(<{}({}< ((((({}))){}){}{}){({}<<>(({}))<>>[()])}{} >)>)}{}(({})<{{}(< (()()()()()){({}<<>(<([{}](((((()()){}){}){}){}()){}())>)<>{({}<<>({}<(({}))>())<>>[()])}<>({}<>{})>[()])}{} >)}>{}) <>((()()()()()){})>())<>>[()])}<>({}<>{}<([{}]())>)>[()])}<> Try it online! • (({})<>)(())<>({}<>) at the beginning can be replaced with (({}<>)<(())>) – Riley Aug 18 '17 at 13:46 # SOGL (SOGLOnline commit 2940dbe), 15 bytes ø─Ζ┘Χ⁴‘5n{.∙.*T To run this, download this and run the code in the index.html file. Uses that at that commit (and before it) * repeated each character, not the whole string. 
Explanation: ø─Ζ┘Χ⁴‘ push " # # ##### # # ##### # # " 5n split into lines of length 5 { for each line do .∙ multiply vertically input times .* multiply horizontally input times T output in a new line Bonus: add 2 inputs for separate X and Y length! • "commit 2940dbe" - I like that idea. Can you explain why ø─Ζ┘Χ⁴‘ pushes that though? – Magic Octopus Urn Aug 17 '17 at 20:54 • @MagicOctopusUrn That's SOGLs compression, which here stores a dictionary of " " and # and the base-2 data required for that string. – dzaima Aug 17 '17 at 21:12 • Neat, is it stable enough for me to start using :)? – Magic Octopus Urn Aug 17 '17 at 21:40 • @MagicOctopusUrn Well it's pretty stable as there have been no answer-breaking changes since SOGLOnline, but whether you can use it (as in understand it) is another question. You can try though and ask question in TNB – dzaima Aug 17 '17 at 21:44 • Haha... Ill wait for documentation then. I do need coddled a little. – Magic Octopus Urn Aug 17 '17 at 21:45 # brainfuck, 224 bytes ,[->+>>>>+<<<<<]>>>+>+++++[-<<<[->+>>>>>+>+++++[-<<<[->+<<<[->>>>>>+<<[->>>+>---<<<<]<<<<]>>>>>>[-<<<<<<+>>>>>>]>[-<<<+>>>]+++++[->+++++++<]>.[-]<<<<<<]>[-<+>]>[-<->]<+[->+<]>>]<<++++++++++.[-]<<<<<]>[-<+>]>[-<->]<+[->+<]>>] Try it online! ## Making-of I tried to build this code by hand and spent quite a few hours, so I decided to make a transpiler in Python. Here is the code I entered to make this code: read(0) copy(0,(1,1),(5,1)) add(3,1) add(4,5) loop(4) loop(1) add(2,1) add(7,1) add(8,5) loop(8) loop(5) add(6,1) loop(3) add(9,1) loop(7) add(10,1) add(11,-3) end(7) end(3) copy(9,(3,1)) copy(10,(7,1)) add(10,5) copy(10,(11,7)) write(11) clear(11) end(5) copy(6,(5,1)) copy(7,(6,-1)) add(6,1) copy(6,(7,1)) end(8) add(6,10) write(6) clear(6) end(1) copy(2,(1,1)) copy(3,(2,-1)) add(2,1) copy(2,(3,1)) end(4) Try it online! # C (gcc), 98 93 bytes 5 bytes thanks to Felix Palmen. i,j;main(a){for(scanf("%d",&a);j<a*5||(j=!puts(""),++i<a*5);)putchar(i/a+1&j++/a+1&1?32:35);} Try it online! # Gaia, 9 bytes # ”ṫ&:Ṁ‡ Pretty much a port of Zgarb's great answer Try it online! (the footer is just to pretty print, the program itself returns a 2D list of characters) ### Explanation # ” Push the string " # " ṫ Bounce, giving " # # " & Repeat each character by input : Copy Ṁ‡ Tabled maximum with itself # Befunge, 105 103 bytes p9p&5*08p>08g- v >,g1+:00p^v0:/5g8_p9g1+:09p08g-#v_@ gvg90\/\g0< # # ##### 0:/5g8,+55< ^>\/2%5*+92++2 Try it online! • Could you add a link to an online interpreter? tio.run/#befunge is a good one AFAIK. – James Aug 18 '17 at 4:54 # Python, 8884 77 bytes lambda x:[[(' #'[i//x%2]+'#')[j//x%2]for j in range(5*x)]for i in range(5*x)] Try it online! Returns 2D list of characters. # PowerShell, 726863 60 bytes param($a)(,($x=,((' '*$a+"#"*$a)*2)*$a)+,("#"*5*$a)*$a)*2;$x Try it online! Takes input $a. Then, we do a bunch of magic string and array manipulation.
(,($x=,((' '*$a+"#"*$a)*2)*$a)+,("#"*5*$a)*$a)*2;$x ' '*$a+"#"*$a # Construct a string of spaces and # ( )*2 # Repeat it twice ,( )*$a # Repeat that $a times to get the top as an array ($x= ) # Store that into $x and immediately output it , + # Array concatenate that with ... ,("#"*5*$a) # another string, the middle bar ...
*$a # repeated$a times.
( )*2; # Do that twice
$x # Output$x again
You can peel off the parts of the explanation starting from the bottom to see how the output is constructed, so hopefully my explanation makes sense.
a#b=a++b++a++b++a
c%l=((c<$l)#('#'<$l))<$l f n=(' '%[1..n])#('#'%[1..n]) Returns a list of strings. Try it online! How it works: a#b=a++b++a++b++a -- concatenate the strings a and b in the given pattern c%l= -- take a char c and a list l (we only use the length -- of l, the actual content doesn't matter) c<$l -- make length l copies of c
'#'<$l -- make length l copies of '#' # -- combine them via function # <$l -- and make length l copies of that string
f n= -- main function
# -- make the "a b a b a" pattern with the strings
-- returned by the calls to function %
' '%[1..n] -- one time with a space
'#'%[1..n] -- one time with a '#'
# Mathematica, 63 bytes
ArrayFlatten@Array[x=#;Table[If[OddQ@-##," ","#"],x,x]&,{5,5}]&
### Explanation
ArrayFlatten@Array[x=#;Table[If[OddQ@-##," ","#"],x,x]&,{5,5}]& (* input N *)
x=# (* Set x to N *)
& (* A function that takes two inputs: *)
If[OddQ@-##," ","#"] (* if both inputs are odd (1), " ". "#" otherwise *)
Table[ ,x,x] (* Make N x N array of that string *)
Array[ ,{5,5}] (* Make a 5 x 5 array, applying that function to each index *)
ArrayFlatten@ (* Flatten into 2D array *)
(1)-## parses into Times[-1, ##]
• ArrayFlatten is very nice. – Mark S. Aug 20 '17 at 2:48
# Python 2, 113 bytes
As an array of strings:
r=[1-1*(i%(2*n)<n)for i in range(5*n)]
print[''.join(' #'[r[k]+r[j]>0]for k in range(len(r)))for j in range(n*5)]
As ASCII art:
# Python 3, 115 bytes
r=[1-1*(i%(2*n)<n)for i in range(5*n)]
for j in range(n*5):print(*(' #'[r[k]+r[j]>0]for k in range(len(r))),sep='')
# Python 3, 117 bytes
p=range(5*n)
for i,e in enumerate([j%(2*n)>=n for j in p]for k in p):print(*[' #'[i%(2*n)>=n or k]for k in e],sep='')
As an array of booleans
# Python 2, 75 bytes
p=range(5*n)
f=lambda o:o%(2*n)>=n
print[[f(j)or f(i)for j in p]for i in p]
• Long time, no see :-) – ETHproductions Aug 17 '17 at 22:55
• Yes, it has! @ETHproductions – Zach Gates Aug 17 '17 at 23:01
# Java 8, 103 bytes
Lambda accepts Integer and prints the octothorpe to standard out. Cast to Consumer<Integer>.
n->{for(int s=5*n,x=0,y;x<s;x++)for(y=0;y<s;)System.out.print((x/n%2+y++/n%2>0?'#':32)+(y<s?"":"\n"));}
Try It Online
## Ungolfed lambda
n -> {
for (
int
s = 5 * n,
x = 0,
y
;
x < s;
x++
)
for (y = 0; y < s; )
System.out.print(
(x / n % 2 + y++ / n % 2 > 0 ? '#' : 32)
+ (y < s ? "" : "\n")
);
}
The key observation here is that, on a 5 by 5 grid of n by n cells, octothorpes appear wherever the row or column number (0-based) is odd. I'm pretty sure this is the cheapest general approach, but it seems further golfable.
## Acknowledgments
• -1 byte thanks to Kevin Cruijssen
• You can place the int s=5*n,x=0,y instead the for-loop to save a byte on the semicolon. – Kevin Cruijssen Aug 18 '17 at 6:40
# Pyth, 28 22 bytes
-6 bytes thanks to @LeakyNun
JsC*]S5Qjmsm@"# "*kdJJ
Test Suite.
• 22 bytes – Leaky Nun Aug 18 '17 at 13:59
• @LeakyNun Thanks, I didn't really golf this answer so far. – Mr. Xcoder Aug 18 '17 at 14:00
# R, 8785 62 bytes
m=matrix(" ",x<-scan()*5,x);m[s,]=m[,s<-rep(!1:0,e=x/5)]="#";m
2 bytes saved by representing c(F,T) as !1:0, thanks to LeakyNun
23 bytes saved thanks to Giuseppe
Try it online!
Explanation (ungolfed):
x=scan()*5 # Multiply input by 5 to get the required width/height of the matrix
m=matrix(" ",x,x) # Create a matrix of the required dimensions
s=rep(!1:0,each=x/5) # The sequence s consists of F repeated n times, followed by T repeated n times
m[s,]="#" # Use s as logical indices to set those rows as "#" characters.
# R recycles the sequence to the height of the matrix.
m[,s]="#" # Same, with columns
write(m,"",x,,"") # Print out across the required number of columns
` |
# A5. Experimental Analyses of Binding
It is often important to determine the Kd for a ML complex, since given that number and the concentrations of M and L in the system, we can predict if M is bound or not under physiological conditions. Again, this is important since whether M is bound or free will govern its activity. The trick in determining Kd is to determine ML and L at equilibrium. How can we differentiate free from bound ligand? The following techniques allow such a differentiation.
### TECHNIQUES THAT REQUIRE SEPARATION OF BOUND FROM FREE LIGAND -
Care must be given to ensure that the equilibrium of $$M + L \rightleftharpoons ML$$ is not shifted during the separation technique.
• gel filtration chromatography - Add M to a given concentration of L. Then elute the mixture on a gel filtration column, eluting with free ligand at the same concentration. The ML complex will elute first and can be quantitated. If you measure the free ligand coming off the column, it will be constant after the ML elutes, with the exception of a single dip near where free L would elute if the column were run without free L in the buffer solution. This dip represents the amount of ligand bound by M.
• membrane filtration - Add M to radiolableled L, equilibrate, and then filter through a filter which binds M and ML. For instance, a nitrocellulose membrane binds proteins irreversibly. Determine the amount of radiolabeled L on the membrane which equals [ML].
• precipitation - Add a precipitating agent like ammonium sulfate, which precipitates proteins and hence both M and ML. Determine the amount of ML.
### TECHNIQUES THAT DO NOT REQUIRE SEPARATION OF BOUND FROM FREE LIGAND
• equilibrium dialysis - Place M in a dialysis bag and dialyze against a solution containing a ligand whose concentration can be determined using radioisotopic or spectroscopic techniques. At equilibrium, determine free L by sampling the solution surrounding the bag. By mass balance, determine the amount of bound ligand, which for a 1:1 stoichiometry gives ML. Repeat at many different ligand concentrations.
• spectroscopy - Find a ligand whose absorbance or fluorescence spectra changes when bound to M. Alternatively, monitor a group on M whose absorbance or fluorescence spectra changes when bound to L.
• isothermal titration calorimetry (ITC)- In ITC, a high concentration solution of an analyte (ligand) is injected into a cell containing a solution of a binding partner (typically a macromolecule like a protein, nucleic acid, vesicle).
Figure: Isothermal Titration Calorimeter Cells
On binding, heat is either released (exothermic reaction) or absorbed (endothermic reaction), causing a small temperature change in the sample cell compared to the reference cell containing just a buffer solution. Sensitive thermocouples measure the temperature difference (ΔT1) between the sample and reference cells and apply a current to maintain the difference at a constant value. Multiple injections are made until the macromolecule is saturated with ligand. The enthalpy change is directly proportional to the amount of ligand bound at each injection, so the observed signal attenuates with time. The actual enthalpy change observed must be corrected for the change in enthalpy on simple dilution of the ligand into buffer solution alone, determined in a separate experiment. The enthalpy changes observed after the macromolecule is saturated with ligand should be the same as the enthalpy of dilution of the ligand. A binding curve showing enthalpy change as a function of the molar ratio of ligand to binding partner ($$L_o/M_o$$ if $$L_o \gg M_o$$) is then made and mathematically analyzed to determine Kd and the stoichiometry of binding.
Figure: Typical isothermal titration calorimetry data and analysis Reference: http://www.microcalorimetry.com/index.php?id=312
It should be clear in the example above that the binding reaction is exothermic. But why is the graph of ΔH vs molar ratio of Lo/Mo sigmoidal (s-shaped) and not hyperbolic? One clue comes from the fact that the molar ratio of ligand (titrant) to macromolecule centers around 1, so, as explained above, when Lo is not >> Mo, the graph might not be hyperbolic. The graphs below show a specific example of a Kd and ΔHo being calculated from the titration calorimetry data. They will shed light on why the graph of ΔH vs molar ratio of Lo/Mo is sigmoidal.
A specific example illustrates these ideas. A soluble version of the HIV viral membrane protein gp120 (4 μM) was placed in the calorimetry cell, and a soluble form of its natural ligand, CD4, a membrane receptor protein from T helper cells, was placed in a syringe and titrated into the cell (Myszka et al. 2000). Enthalpy changes per injection were determined and the data were transformed and fit to an equation which shows the ΔH "normalized to the number of moles of ligand (CD4) injected at each step". The line fit to the data in that panel is the best fit line assuming a 1:1 stoichiometry of CD4 (the "ligand") to gp120 (the "macromolecule") and a Kd = 190 nM. Please note that the curve is sigmoidal, not hyperbolic.
Figure: Titration Calorimetry determination of Kd and DH for the interaction of gp120 and CD4
Note that the stoichiometry of binding (n), the KD, and the ΔHo can be determined in a single experiment. From the value of ΔHo and KD, and the relationship
$\Delta G^o = -RT\ln K_{eq} = RT\ln K_D = \Delta H^o - T\Delta S^o$
the ΔGo and ΔSo values can be calculated. No separation of bound from free is required. Enthalpy changes on binding were calculated to be -62 kcal/mol.
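As a quick numerical illustration of that relationship (a sketch only; the 25 °C temperature is an assumption not stated above), ΔG° and ΔS° follow directly from the reported KD and ΔH°:

```python
import math

R = 1.987e-3   # kcal/(mol*K)
T = 298.15     # K (assumed 25 degrees C)
Kd = 190e-9    # M, from the gp120/CD4 example above
dH = -62.0     # kcal/mol, from the same example

dG = R * T * math.log(Kd)      # = -RT ln(Keq), since Keq = 1/Kd
dS = (dH - dG) / T             # kcal/(mol*K)
print(f"dG = {dG:.1f} kcal/mol, dS = {dS*1000:.0f} cal/(mol*K)")
```

With these values ΔG° comes out around -9 kcal/mol and ΔS° strongly negative, i.e. the binding is enthalpy-driven and entropically unfavorable.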
Using the standard binding equations (5, 7, and 10 above) to calculate free L and ML at a variety of Lo concentrations and R = Lo/Mo ratios, a series of plots can be derived. Two were shown earlier in this chapter section to illustrate differences in Y vs L and Y vs Lo when Lo is not >> Mo. They are shown again below:
Figure: Y vs L and Y vs Lo when Lo is not >> Mo
Next a plot of ML vs R (= [Lo]/[Mo]) (below, panel A1, right) was made. This curve appears hyperbolic but it has the same shape as the Y vs Lo graph above (right). However, if the amount of ligand bound at each injection (calculated by subtracting [ML] for injection i+1 from [ML] for injection i) is plotted vs R (= [Lo]/[Mo]), a sigmoidal curve (below, panel A2, left) is seen, which resembles the best-fit graph for the experimentally determined enthalpies above. The relative enthalpy change for each injection is shown in red. Note that the graph in A2 actually shows the negative of the amount of ligand bound per injection, to make it look like the graph showing the actual titration calorimetry trace and fit above.
Figure: Binding Curves that Explain Sigmoidal Titration Calorimetry Data for gp120 and CD4
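A minimal sketch of the same idea (ignoring dilution of the cell contents, and using concentrations similar to the gp120/CD4 example) shows why the per-injection signal is sigmoidal when plotted against R:

```python
import math

def ml_bound(Mo, Lo, Kd):
    b = Mo + Lo + Kd
    return (b - math.sqrt(b * b - 4.0 * Mo * Lo)) / 2.0

Mo, Kd = 4.0, 0.19                           # uM gp120 in the cell, Kd = 190 nM in uM
ratios = [0.1 * i for i in range(1, 31)]     # R = Lo/Mo from 0.1 to 3.0
ML = [ml_bound(Mo, r * Mo, Kd) for r in ratios]

# Ligand newly bound at each "injection"; proportional to the heat released per injection
per_injection = [ML[0]] + [ML[i] - ML[i - 1] for i in range(1, len(ML))]
for r, q in zip(ratios, per_injection):
    print(f"R = {r:.1f}   d[ML] = {q:.3f}")
```

The per-injection increments stay nearly constant while M is far from saturated and then fall off sharply around R = 1, which is the sigmoidal shape seen in the calorimetry trace.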
### Surface Plasmon Resonance
A newer technique to measure binding is called surface plasmon resonance (SPR), which uses a sensor chip consisting of a 50 nm layer of gold on a glass surface. A carbohydrate matrix is then added to the gold surface. To the carbohydrate matrix is attached, through covalent chemistry, a macromolecule which contains a binding site for a ligand. The binding site on the macromolecule must not be perturbed to any significant extent. A liquid containing the ligand is flowed over the binding surface.
The detection system consists of a light beam that passes through a prism on top of the glass layer. The light is totally reflected, but another component of the wave, called an evanescent wave, passes into the gold layer, where it can excite the Au electrons. If the correct wavelength and angle are chosen, a resonant wave of excited electrons (plasmon resonance) is produced at the gold surface, decreasing the total intensity of the reflected wave. The angle of the SPR is sensitive to the layers attached to the gold. Binding and dissociation of ligand is sufficient to change the SPR angle, as seen in the figure below.
• Fig: Surface Plasmon Resonance. Image used with permission (CC BY-SA 3.0; SariSabban)
animation: SPR evanescent wave
This technique can distinguish fast and slow binding/dissociation of ligands (as reflected in on and off rates) and can be used to determine Kd values (through measurement of the amount of ligand bound at a given total concentration of ligand, or more indirectly through determination of both kon and koff).
Binding DB: a database of measured binding affinities, focusing chiefly on the interactions of proteins considered to be drug targets with small, drug-like molecules
PDBBind-CN: a comprehensive collection of the experimentally measured binding affinity data for all types of biomolecular complexes deposited in the Protein Data Bank (PDB). |
# What happens when you apply a logarithm transformation to data?
I ran into the following question:
What happens when you apply a logarithm transformation to data?
One thing that is often done is that each value $y_i$ of the variable on the vertical axis is replaced not with $\log y_i$, but with $\mathrm{GM}(y)\cdot\log y_i$, where GM is the geometric mean $(y_1\cdots y_n)^{1/n}$. That way the transformed and untransformed data are both measured in the same units, so that you can make sense of such statements as that one of them had a smaller sum of squares of residuals and is therefore a better fit. – Michael Hardy Oct 27 '12 at 21:04
$y=cx^n\implies \ln y=n\ln x + \ln c$. So any power law becomes a linear equation in logarithms, so the best possible values of $c$ and $n$ can be solved for by least squares methods or other similar things, which is normally much easier than estimating values directly from the original data. – Robert Mastragostino Oct 27 '12 at 23:12 |
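To make the second comment concrete, here is a small sketch (with made-up data assumed to follow a power law) that recovers $c$ and $n$ by least squares on the logged values:

```python
import numpy as np

# Hypothetical data assumed to follow y = c * x**n with a little noise
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([3.1, 12.2, 49.5, 191.0, 770.0])

# Least-squares fit of ln y = n * ln x + ln c
n, ln_c = np.polyfit(np.log(x), np.log(y), 1)
c = np.exp(ln_c)
print(f"n = {n:.2f}, c = {c:.2f}")   # recovers roughly n = 2, c = 3
```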
Lemma 15.115.8. Let $A$ be a discrete valuation ring. Assume the residue field $\kappa _ A$ has characteristic $p > 0$ and that $a \in A$ is an element whose residue class in $\kappa _ A$ is not a $p$th power. Then $a$ is not a $p$th power in $K$ and the integral closure of $A$ in $K[a^{1/p}]$ is the ring $A[a^{1/p}]$ which is a discrete valuation ring weakly unramified over $A$.
Proof. This lemma proves itself. $\square$
# A Case Study on Machine Learning Applications and Methods for Improving Learning Algorithm Performance
• Accepted : 2016.02.20
• Published : 2016.02.28
#### Abstract
This paper aims to present ways to obtain significant results by improving the performance of learning algorithms in research that applies machine learning. Research papers reporting results from machine learning methods were collected as the data for this case study. In addition, suitable machine learning methods for each field were selected and suggested in this paper. As a result, SVM for engineering, decision tree algorithms for medical science, and SVM for other fields showed their efficiency in terms of their frequent use cases and classification/prediction performance. By analyzing cases of machine learning application, a general characterization of application plans is drawn. Machine learning application has three steps: (1) data collection; (2) data learning through an algorithm; and (3) a significance test on the algorithm. Performance is improved in each step by combining algorithms. Ways of improving performance are classified as multiple machine learning structure modeling, $+{\alpha}$ machine learning structure modeling, and so forth.
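The three-step flow described above (collect data, learn with an algorithm, test the result) can be sketched as follows. This is purely illustrative, assuming scikit-learn and a toy dataset rather than the authors' data or code:

```python
# Sketch of the three-step pipeline: (1) data collection, (2) learning through an
# algorithm (here an SVM, one of the methods highlighted in the abstract),
# (3) a simple held-out evaluation of the result.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                                   # (1) data collection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = SVC(kernel="rbf").fit(X_tr, y_tr)                           # (2) learning
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))  # (3) evaluation
```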
#### Acknowledgement
Supported by : Computer Science Research Institute, Seoul Women's University
# Incompressible LBM (Lattice Boltzmann Method) Advanced Options
This document focuses on some important aspects of the Incompressible LBM (lattice Boltzmann method) analysis type in SimScale in detail.
## Geometry
The Pacefish®$$^1$$ LBM solver has the ability to deal with many CAD types and is generally more robust than many solvers in terms of cleanliness of the geometry, where open geometry, poor faces, and small faces don’t really matter. That said, on the odd occasion you have issues or errors, if you inspect the geometry and don’t find anything fundamentally wrong, a *.stl can normally be loaded.
Optimize for PWC/LBM
This option allows you to import a *.stl file that is optimized for the Incompressible LBM and Wind Comfort analysis types. It leaves out complex import steps like sewing and cleanup that are not required by the LBM solver, and therefore also allows you to import big and complex models quickly.
## Wall Modeling
### Y+ Requirements
The Y+ requirements for LBM tend to be more forgiving than those of the equivalent finite volume methods. For example, the K-omega SST (uRANS) model in the FVM implementation has an approximate requirement of 30 < Y+ < 300; however, in SimScale's LBM implementation the lower bound is not considered a requirement, and instead an upper bound of less than 500, and certainly not higher than 1000, is recommended. The solver will additionally warn for Y+ values higher than 2000 in the near-wall voxel.
If the Y+ is much higher than expected, such that results are likely to be impacted, the user will be warned in the interface as follows:
Warning
High velocities encountered that might not be handled by the current mesh resolution. Please check your results and consider refining the mesh further.
OR
Mesh resolution might not be sufficient for correct turbulence modeling. Please check your results and consider refining the mesh.
And in the solver log with an error:
Warning
ERROR @ DomainHealthStatusExporter.cpp:60: simulationTime=748, domainHealthStatus=(maxVelMag=0.323367, minRho=0.697381, maxRho=1.48795, maxNuT=0.000625238, maxWallCellSizeYp=102696)
This shows a maximum Y+ of roughly 100k, which is obviously wrong and needs to be reduced. The main methods of doing this are applying Reynolds scaling (see the section below) or refining the surface. If the surface is already refined to a reasonable level, scaling is the only option without excessively increasing the cost of your simulation.
Regarding Y+ targets, Pacefish®$$^1$$ is much more flexible than FVM codes with wall functions. It has no limitation regarding the low-bound value. The results should not suffer from wall resolution as long as the Y+ of the wall-adjacent voxels does not exceed 500 to 1000.
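As a rough pre-check (a sketch only, using a flat-plate skin-friction correlation rather than anything the solver computes), you can estimate the Y+ of the wall-adjacent voxel before running; the velocity, length, and voxel size below are made-up example values:

```python
import math

def estimate_y_plus(U, L, nu, wall_cell_size):
    """Rough flat-plate estimate of Y+ at the wall-adjacent voxel.

    U: freestream velocity [m/s], L: reference length [m],
    nu: kinematic viscosity [m^2/s], wall_cell_size: near-wall voxel size [m].
    Uses Cf ~ 0.026 * Re^(-1/7); order-of-magnitude only.
    """
    Re = U * L / nu
    Cf = 0.026 * Re ** (-1.0 / 7.0)
    u_tau = U * math.sqrt(Cf / 2.0)
    return u_tau * wall_cell_size / nu

# Example: 10 m/s wind over a 30 m building in air, 0.2 m near-wall voxel
print(estimate_y_plus(U=10.0, L=30.0, nu=1.5e-5, wall_cell_size=0.2))
```

With these example numbers the estimate lands well above the recommended upper bound, which is exactly the situation where surface refinement or Reynolds scaling (see below) is needed.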
## Turbulence Models in Incompressible LBM
### K-omega SST Model
From Figure 2 we can see the different regions of the boundary layer and why, when modelling the layer, it's encouraged to avoid the log-law region. However, K-omega SST models the layer up until the first cell and then solves from there on, and this model has been proven to be very accurate in many industries, including the aerospace industry.
Although the K-omega SST model is highly accurate, more accurate models exist, namely LES (Large Eddy Simulation) and DES (Detached Eddy Simulation).
### LES Models
LES is more accurate as it models only eddies smaller than the grid filter and solves the flow regime larger than the grid filter size. One of its downfalls is its inability, in its standard form, to model walls, therefore requiring either a very fine mesh or flows where wall interactions are least dominant. Pure LES models such as the 'LES Smagorinsky' model have Y+ requirements similar to the equivalent FVM model, where Y+ is around or below 1. This is one of the main reasons LES will be a more expensive simulation.
However, if a wall model were to be added, we could obtain the accuracy improvements without the requirement of such a fine mesh, and this is where the advantage of DES or Detached Eddy Simulation comes in.
Smagorinsky (direct) turbulence model
Besides the traditional Smagorinsky model, SimScale also offers Smagorinsky (direct) turbulence model.
The “Smagorinsky” model strictly follows the original formulation and the LES idea. “Smagorinsky (direct)” is a bit cheaper, but slightly modified. For “Smagorinsky (direct)” only the LBM mesh has to be computed during a time step, while for “Smagorinsky” both the LBM and the finite-difference meshes have to be computed during a time step, making it comparatively costlier.
### DES Models
DES turbulence models are a hybrid LES-uRANS model that uses RANS formulation in the boundary layer and LES formulation in the far-field achieving an optimum between both worlds. In the LBM solver, two detached eddy models are available, the K-omega SST DDES (Delayed Detached Eddy Simulation) and the K-omega SST IDDES (Improved Delayed Detached Eddy Simulation).
The DES models ‘K-omega SST DDES’ and ‘K-omega SST IDDES’ have similar wall requirements to the uRANS ‘K-omega SST’ since the wall model is based upon the same model however, at some point the near-wall region transitions from K-omega SST to LES.
The difference between DDES and IDDES is that IDDES blends from uRANS to LES in the buffer region which can be approximated to be somewhere between 5 < Y+ < 30, whereas the DDES model blends from uRANS to LES in the log-law region 30 < Y+. Therefore, depending upon the Y+ values of your mesh choose the appropriate DES turbulence model. For example, if your Y+ is around 100, then the DDES model would be better, however, if the Y+ is below 5, the IDDES would be more suited than DDES.
'Since the K-omega SST model probably swallows some of the transient effects and you are tempted to use the plain Smagorinsky model, make sure that the wall resolution is around or below a Y+ of 1.' – (Eugen, 2018)$$^1$$. However, when you are simply rerunning the simulation for improved results without refining the mesh, please consider using the DES turbulence models available in SimScale instead of the plain K-omega SST and Smagorinsky models.
## Reynolds Scaling Factor
It is common to scale down a model physically for wind tunnel testing or to slow down a flow or change other flow parameters. Examples of such a requirement include testing a scaled building or a plane in subsonic flows. In SimScale, the Reynolds scaling factor (RSF) can apply this scaling automatically to a full-scale geometry.
Not only is this scaling important in wind tunnels for obvious sizing reasons, but it is also required in the LBM method, where a high Reynolds number will create a thin boundary layer which will in turn need a finer mesh to resolve. Since the LBM requires a lattice where the aspect ratio is 1, a perfect cube, refining to the required Y+ values may become expensive. On top of that, if you were to refine to the required level at the surface without scaling, then, because the Courant number is being maintained at a value lower than 1, the number of time steps required for the same time scale would increase, further increasing the simulation expense.
The depicted validation case, AIJ Case E, for pedestrian wind comfort is compared to a wind tunnel where the scale of the city is 1:250, and a scaling factor of 0.004 could be used, or alternatively, we can use auto meshing where the RSF is applied automatically. If dealing with a high Reynolds number it is recommended that some literature review is used to understand an acceptable scaling factor for the application, or if in research, choose the matching scale factor to the wind tunnel you are comparing to.
The Reynolds scaling factor can be accessed under Mesh settings > Manual only in an Incompressible LBM analysis type.
The Reynolds number is defined as $$Re = U L/ \nu$$ where $$L$$ is the reference length, $$U$$ is the velocity, and $$\nu$$ is the kinematic viscosity of the fluid. When a scaling factor is applied, instead of sizing the geometry down, the viscosity is increased to ensure that the Reynolds number is reduced to the correct scaling.
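A minimal sketch of that idea (not SimScale's internal code): keeping $$U$$ and $$L$$ at full scale and dividing the viscosity by the RSF reduces the Reynolds number by the same factor.

```python
def scaled_viscosity(nu, rsf):
    """Effective viscosity so that Re_scaled = RSF * Re while U and L stay at full scale."""
    return nu / rsf

nu_air = 1.5e-5          # m^2/s
rsf = 0.004              # e.g. matching a 1:250 wind-tunnel scale
U, L = 10.0, 30.0        # example velocity [m/s] and reference length [m]

Re_full = U * L / nu_air
Re_scaled = U * L / scaled_viscosity(nu_air, rsf)
print(Re_full, Re_scaled, Re_scaled / Re_full)   # the ratio equals the RSF
```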
## Meshing
We apply a simple rule of thumb where the mesh of the worst-resolved solid is at most two refinement levels below the best-resolved solid. Because memory consumption scales with second order and computational effort with third order, you will already have a huge saving relative to resolving all solids at the highest refinement level (93% less memory and 99% less computation time), but at the same time have a stable (non-changing) resolution at the wall, getting rid of numeric effects at the transitions.
Please consider grid transition at solids an expensive operation in terms of results quality, even if no computation errors occur and you do not directly see the effects. This means you can use it, but do so carefully. Try to maintain the same refinement level for solids as far as possible. Just follow the above-mentioned rule of thumb using a VoxelizedVolume with a unidirectional extrusion size of 4 voxels and a directional downstream extrusion of 16 voxels, and you will get very good geometry-adapted meshes, better suited for the simulation in almost any case than refinement regions built from manual boxes. Generally, consider refinement boxes a tool from the Navier-Stokes world. They still work for Pacefish®, but VoxelizedVolumes work much better.
## Results
Ordinary Finite Volume Method based solvers usually run in the steady-state, and usually on grids sub 20 million, so saving the entire results for the final step is no issue. However, on the LBM solver, it’s normal to have grids bigger than 100 million cells, and since it’s transient, results are computed at every time step. The size of a complete result is usually too large to realistically fully output and store.
If the simulation runs out of results storage an error will start appearing in the logs:
Error
FATAL @ EnSightExport.cpp:3679: EnSight data export to “export/trans_Pedestrian__PACEFISHSPACE__Level__PACEFISHSPACE__SlicePACEFISH” FAILED because of file I/O issue. Please check the access rights and the available disk space at the destination.
If this starts appearing it is advised to immediately stop the simulation and re-adjust the result controls to reduce the size of the written data, as any further produced data is unlikely to be written and therefore further solve time will not gain you additional results and only waste GPU hours.
### Conservative Approach
Predicting the amount of data written is not an exact science as the results depend upon the mesh size, the export domain size, frequency of transient result write, and the time a simulation is run for. So, although it might be hard to judge, simply being conservative, realistic, and putting thought into what you need at the end of a simulation will likely produce simulation results without error.
Example 1
If you are interested in peak velocities at various points at pedestrian height in a city, you could simply export transient data of the encompassing area, however, to get good transient results many writes will be needed, and realistically, at every time step. This won’t be practical for a well-refined domain exceeding 100 million cells with appropriate wall refinements. An alternative would be to save a region much smaller, such as a slice with a small region height drastically reducing the size of the results.
We could be even more conservative: if we know the points of interest, we could upload these points as a CSV file and export them at every time step.
Example 2
In wind loading, where you simply want to understand pressures on the surfaces of the building, you could export fluid and surface data around a city, or reduce it to just the building of interest. Furthermore, we could remove the volume data and only export surface data reducing the size of the results to two dimensions.
In the above two examples, it is up to the user to determine the level of results they require, however, every time you drop a level a significant amount of additional storage space becomes available, leading to highly productive simulation runs.
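As a rough feel for the trade-off (every number here is an assumption, not a SimScale limit), the written data size scales with the product of exported cells, fields, bytes per value, and number of writes:

```python
# Back-of-the-envelope estimate of transient-output size; all inputs are assumptions.
def output_size_gb(n_cells, n_fields=4, bytes_per_value=4, n_writes=1000):
    return n_cells * n_fields * bytes_per_value * n_writes / 1e9

print(output_size_gb(100e6))                  # full 100M-cell domain: ~1600 GB -> impractical
print(output_size_gb(2e6))                    # a thin pedestrian-level slice: ~32 GB
print(output_size_gb(50, n_writes=100000))    # a handful of probe points: negligible
```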
For this reason, the LBM allows three main methods of result exportation: Transient output, Statistical averaging, and Snapshot. Let’s go through these three options:
### Transient Output
With this result control item, transient results can be saved for a given part of the simulation run, for a given output frequency, for a desired region within the fluid domain. The settings panel looks as follows:
The following parameters need to be specified:
• Write control: It represents the frequency of result output. You can choose between coarse (output every 8 time steps), moderate (4 time steps) and high (2 time steps) resolution besides custom where a desired value can be specified in seconds.
• Fraction from end: It defines the point in the simulation from which the extraction of result output begins. For example, a Fraction from end of 1 (100%) analyses all data from the beginning of the simulation; however, this might be undesirable since the flow takes some time to initialize and stabilize to a somewhat periodic, quasi-steady flow. Therefore, values such as 0.5 (the last 50% of the simulation) and the default 0.2 (the last 20%, i.e., starting 80% into the simulation) are better.
• Export flow-domain fields: When toggled-on, simulation data of the flow domain enclosed within the assigned geometry primitives will be exported.
• Export surface fields: When toggled-on, simulation data on the surfaces enclosed within the assigned geometry primitives will be exported.
• Geometry primitives: Assign the regions within which the export of the simulation data will be restricted. Besides the Export Flow Domain, you can assign a cartesian box or a local slice.
Note
Transient results are recommended to be saved for small domains especially if an animation is desired. If your simulation runs out of memory, your simulation will fail, wasting potentially a lot of solve time. So be conservative with the transient output and think about the exact results you need.
### Statistical Averaging
With this result control item, the average of the exported transient output will be calculated for a given fraction from end. For example, for a fraction from end of 0.2 the average of each field value within 20% from the end of the simulation will be computed.
We can’t take every time step for the calculation of the average, as this would be computationally too expensive. Hence, to make the computation effort feasible, we use the Sampling interval that stores results only every 2nd, 4th, or 8th time step (besides custom resolution) for the averaging.
The rest of the filters in the settings panel are the same as those for the transient output discussed above.
### Snapshot
With this result control item, the final state of the transient results can be output. That is, no intermediate results can be observed, only those at the final time step.
As expected, there are no write filters:
### Forces and Moments
This result control item allows calculating forces and moments in the course of the simulation by integrating the pressure and skin-friction over a boundary. It is possible to select a set of boundaries to calculate the overall force and moment on them.
The following parameters need to be specified:
• Center of rotation: The Center of rotation is commonly defined as the center of mass of the structure. In some simulation projects, it may be convenient to define different coordinates for the center of rotation.
• Write control, Fraction from end: Discussed above.
• Export statistical data: When toggled on, statistical data for the forces and moments will be exported. This includes:
• Minimum, Absolute minimum
• Maximum, Absolute maximum
• Average
• Standard deviation
• Root mean square
• Group assignments: When toggled on, the total sum of forces and moments acting on all the assigned surfaces will be calculated. When toggled off, the calculation will be done individually for each face.
### Probe Points
Probe points are useful as velocity measuring devices (virtual hot wires or pitot tubes) or can be added to monitor pressure at a point (virtual pressure tap points) where data for each probe is returned as components of velocity and pressure. Additionally, statistical data on those points can also be exported in the form of a sheet.
The format for specifying the probe plot is:
Label, X ordinate, Y ordinate, Z ordinate
Where an example is:
probe0,8.5,9.25,2.5
probe1,15.0,9.25,2.5
Probe2,20.0,9.25,2.5
This can easily be done in a notepad (.txt), excel or your choice of spreadsheet software which can export in .csv format.
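Equivalently, a short script like the following (hypothetical file name, coordinates taken from the example above) writes a probe file in that format:

```python
import csv

# Write a probe-point file in the "Label, X, Y, Z" format shown above.
probes = [("probe0", 8.5, 9.25, 2.5),
          ("probe1", 15.0, 9.25, 2.5),
          ("probe2", 20.0, 9.25, 2.5)]

with open("probes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in probes:
        writer.writerow(row)
```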
Time steps bigger than the export frequency
It is important to note that if the time steps are bigger than the export frequency, then the data is returned at the rate of the time step size, and the user is warned in the user interface. This is important if doing a spectral analysis with a different frequency than export frequency in the interface. This is true for all the result control items.
References
Last updated: April 7th, 2021 |
# sudan vertical cylindrical tank fire volume
### Tank Volume Calculator
Total volume of a cylinder-shaped tank is the area, A, of the circular end times the height, h. A = πr² where r is the radius, which is equal to d/2. Therefore: V(tank) = πr²h. The filled volume of a vertical cylinder tank is just a shorter cylinder with the same radius, r.

Tank Volume Calculator - Oil Tanks (Mar 26, 2015): The tank size calculator on this page is designed for measuring the capacity of a variety of fuel tanks. Alternatively, you can use this tank volume calculator as a water volume calculator if you need to calculate some specific water volume. The functionality of this

Greer Tank Calculator | Greer Tank, Welding & Steel: If you need assistance with our tank volume calculator or would like to inquire about any of our services, give us a call today on 1-800-725-8108. Vertical Cylindrical Tank: all dimensions are in inches, the volume is U.S. gallons. Rectangle / Cube Tank: all dimensions are in inches, the volume is U.S. gallons.
### HOW TO CALCULATE THE VOLUMES OF PARTIALLY FULL
cylindrical tanks, either in horizontal or vertical configuration. Consider, for example, a cylindrical tank with length L and radius R, filled up to a height H. If you want to obtain the volume of the liquid that partially fills the tank, you should indicate if the tank is in the horizontal or vertical position.

Find Tank Wetted Surface Area | Chemical Processing (Feb 05, 2018): If the vessel's elevation and diameter are such that the entire vessel is not within the 25-ft vertical fire zone, a partial surface area calculation is needed [1]. Equations for wetted surface areas of horizontal and vertical cylindrical tanks with conical, guppy and torispherical heads are available.

Try Our Tank Volume Calculator To Work Out The Size of Your Tank: Octane's Tank Volume Calculator makes it really easy to work out the volume of your tank and is totally free for you to use. All you need to do is follow the 4 steps below: 1. Click one of the 3 tabs across the top which represents your tank 2. Select your measurement units 3. Enter your tank's length, width etc 4.
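For reference, here is a small sketch of the standard circular-segment formula mentioned in the snippets above, for the liquid volume of a partially filled horizontal cylindrical tank (example dimensions are arbitrary):

```python
import math

def horizontal_cylinder_fill_volume(R, L, h):
    """Liquid volume in a horizontal cylindrical tank of radius R and length L,
    filled to depth h (0 <= h <= 2R). Uses the circular-segment area formula."""
    if not 0 <= h <= 2 * R:
        raise ValueError("fill depth must be between 0 and the tank diameter")
    segment_area = R * R * math.acos((R - h) / R) - (R - h) * math.sqrt(2 * R * h - h * h)
    return segment_area * L

# Example: tank of 1 m radius and 4 m length, filled to 0.5 m depth
print(horizontal_cylinder_fill_volume(R=1.0, L=4.0, h=0.5))   # ~2.46 m^3
```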
### Tank Volume Calculator - ibec language institute
Try Fusion's free tank volume calculator for industrial mixing. Please fill all required fields Tank Volume Calculator , sudan vertical cylindrical tank fire volume *Program does not calculate liquid volume into upper head on vertical tanks (mixing with liquid in head space is not good practice).Math Forum - Ask Dr. Math Archives: Volume of a TankFinding the Volume of a Tank, a selection of answers from the Dr. Math archives. Volume of a Cylinder What is the volume of the storage tank with a diameter 6m, height 5m? Volume of Liquid in a Cylindrical Tank How can I calculate volume of liquid in a cylinder that's not full and lying horizontally? Units and Cylinder VolumeVERTICAL CYLINDER CALCULATOR - 1728If you want to do calculations for a horizontal cylinder, then go to this link: Horizontal Cylinder Calculator. Example: Inputting tank height = 12, liquid level = 3 and tank diameter = 6, then clicking "Inches" will display the total tank volume in cubic inches and US Gallons and will also show the volume
### Venting Guide for Aboveground Storage Tanks
Venting Guide for Aboveground Storage Tanks , sudan vertical cylindrical tank fire volume Horizontal Cylindrical Storage Tank Vertical Cylindrical Storage Tank Horizontal Rectangular Storage Tank Calculation Tables , sudan vertical cylindrical tank fire volume in event of an exposure fire. In a horizontal tank, the wetted area is calculated as 75% of the exposed surface area. In a verticalCylinder, tube tank calculator: surface area, volume , sudan vertical cylindrical tank fire volumeNow it is a simple matter to find the volume of the cylinder tank as it provides estimation of the total and filled volumes of the water tank, oil tank, reservoirs like a tube tank, and others of horizontal or vertical cylindrical shape. Sometimes it is also needed to estimate a surface size of your tank.Vertical Cylindrical Shaped Tank Contents CalculatorVertical Cylindrical Shaped Tank Contents Calculator , sudan vertical cylindrical tank fire volume How to determine milk tank level and volume from pressure; , sudan vertical cylindrical tank fire volume Product Enquiry. Send us your enquiry for a product associated with this Vertical Cylindrical Shaped Tank Contents Calculator page, , sudan vertical cylindrical tank fire volume
### Vertical Tank Volume Calculator - Alberta
Vertical Tank Volume Calculator . The dimensions for diameter, height and depth should be inside dimensions, otherwise the results will be larger than the real volume. Depth to liquid is the dip-stick measurement from the top of the tank to the surface of the liquid.Volume and Wetted Area of Partially Filled Vertical , sudan vertical cylindrical tank fire volumeNov 04, 2014 · The calculation of the wetted area and volume of a vertical vessel is required for engineering tasks such fire studies and the determination of level alarms and control set points. However the calculation of these parameters is complicated by the geometry of the vessel, particularly the heads. This article details formulae for calculating the wetted area and volume of these vessels for various , sudan vertical cylindrical tank fire volumeTank Volume Calculator - SA OilEnter vertical cylindrical tank dimensions: Diameter Height Measurement. Optional: Enter liquid height to work out approximate tank contents (same measurement type required throughout). Liquid Height. Capacity based upon flat bottom tank , sudan vertical cylindrical tank fire volume Input Volume. Convert From. Convert To Calculate. Result
### Calculating Tank Volume
Calculating Tank Volume Saving time, increasing accuracy By Dan Jones, Ph.D., P.E. alculating fluid volume in a horizontal or vertical cylindrical or elliptical tank can be complicated, depending on fluid height and the shape of the heads (ends) of a horizontal tank or the bottom of a vertical tank.Partially full cylinder, sphere, and cone volume , sudan vertical cylindrical tank fire volumeEquations for Sphere, Cylinder, and Cone Volume (Rade and Westergren, 1990) Discussion of Volume Calculation This web page is designed to compute volumes of storage tanks for engineers and scientists; however, it may be useful to anyone who needs to know the volume of a full or partially full sphere, cylinder, or cone.Power Engineering 3B2 - Chapter 13 Flashcards | Quizletc. the fire tube temperature d. the coil bundle temperature , sudan vertical cylindrical tank fire volume e. the level is always above the expansion tank. A. , sudan vertical cylindrical tank fire volume In a Vertical Cylindrical Heater, they are bottom fired and have a _____ radiant section and the convection section is _____. a. Vertical, horizontal. b. Cylindrical, rectangular.
### Vertical Tanks - www.specialtytankandwelding
Vertical Round Tanks (Double Wall) Specialty Tank & Welding Double Wall Vertical Cylindrical Tanks are the solution to your aboveground containment needs. All are equipped with extra ports for monitoring of the interstice space. Quick and easy to install. Possess the strength and impermeability of steel.Tank calculations - MrExcelDec 19, 2014 · We use what are called tank strappings for some of our additive tanks. They were built from formulas for cylindrical horizontal or vertical tanks. I pugged in the size of the tanks and a strapping was produced. From there I set up a form using vlookups to the strappings. Give me some tank measurements and I'll see if what I have at work will , sudan vertical cylindrical tank fire volumeCompute Fluid Volumes in Vertical TanksDec 18, 2003 · The equations for fluid volumes in vertical cylindrical tanks with concave bottoms are shown on p. 30. The volume of a flat-bottom vertical cylindrical tank may be found using any of these equations and setting a = 0. Radian angular measure must be used for trigonometric functions.
### * Sloped Bottom Tank - arachnoid, sudan vertical cylindrical tank fire volume
The easy part the cylindrical section above the slope, which has a volume of: (1) $\displaystyle v = \pi r^2 h$ v = volume; r = tank radius; h = cylindrical section height; More difficult the tank's sloped section, which lies between the tank's bottom and the top of the slope where the tank RESERVOIR DESIGN AND STORAGE VOLUMEvolume. 9.0.1 Effective Storage . Total tank volume, as measured between the overflow and the tank outlet elevations, may not necessarily equal the effective volume available to the water system. Effective volume is equal to the total volume less any dead storage built into the reservoir. For example, a Tank Calculator | WemacVertical Tanks; Small Bulk Oil Verticals; Waste Oil Tanks; Oval Basement Tanks; Portable Trailer Tanks; Bulk Plant Loading Docks; Special Processing Tanks; Square Tanks. Pickup Truck Tanks; UL Stationary Tanks; Generator Base Tanks; Fire Tested Tanks. Fireguard; Flameshield; Package Fuelers; Accessories; Gallery; Tank Charts; Contact Us
### Calculator to Find liquid volume for vertically mounted , sudan vertical cylindrical tank fire volume
Apr 13, 2017 · Sugar industry or other industries using vertically mounded cylindrical tanks for sealing of vacuum equipment like vacuum condensers and multiple effect evaporator lost body liquid extraction (outlet). Sometimes it has also called mound. In this cylindrical tank sometimes inside of the tank might be separated by vertical plate for easily operation purpose.Tank Volume Calculator - Vertical Cylindrical Tanks - ImperialVertical cylindrical tank volume calculator diagram: Fill Rate Fill Times @ Total Tank Fill Time Current Time to Fill Current Time to Empty If you're cutting blocks, concrete, stone or ANYTHING and there's DUST - DON'T TAKE THE RISK Don't cut it, or cut it wet so , sudan vertical cylindrical tank fire volumeHorizontal Tank Volume Calculations - HagraTools are provided for cylindrical horizontal tanks, and for oval (elliptical) tanks. Horizontal Cylindrical Tank Volume Calculator. Horizontal Oval Tank Volume Calculator. Disclaimer. The calculations on these pages are a purely theoretical exercise! Therefore the outcomes of the calculations on these pages can only be used for indicative , sudan vertical cylindrical tank fire volume
### Math Forum - Ask Dr. Math Archives: Volume of a Tank
Finding the Volume of a Tank, a selection of answers from the Dr. Math archives. Volume of a Cylinder What is the volume of the storage tank with a diameter 6m, height 5m? Volume of Liquid in a Cylindrical Tank How can I calculate volume of liquid in a cylinder that's not full and lying horizontally? Units and Cylinder VolumeTank volume calculator - Apps on Google PlaySep 25, 2018 · The application calculates the volume and weight of a liquide by its filling level, empty and total volume. Calculations are performed for: - Rectangular tank - Horizontal cylindrical tank - Vertical cylindrical tank - Cylindrical tank with conical bottom - Cylindrical tank with truncated conical bottom - Cylindrical tank with spherical bottomTank Calibration Chart Calculator - ODay EquipmentFiberglass Tanks. ODay Equipment provides dome end fiberglass tanks from Xerxes and Containment Solutions. The domes on fiberglass tanks vary by manufacturer. So, here are the manufacturers web sites that have calibration charts specific to their designs. Xerxes Go to the Library tab for PDF versions of their charts.
### L/D Ratio of storage tank - Chemical engineering other , sudan vertical cylindrical tank fire volume
- Desired vapor space volume and tank venting requirements - Minimum Fire protection distance requirements for combustible and flammable liquids , sudan vertical cylindrical tank fire volume (L/D implies horizontal "bullet" like tanks, sudan vertical cylindrical tank fire volume.) What sort of volume / size are you looking at? If you have the space, most vertical cylindrical tanks are bigger diameter than height.Cylindrical Tank ProblemsIn order to find the volume, we would have to find the area of the section covered by the oil at the end of the tank and then multiply by the length of the tank. But, how do you find the area of such a figure? Let's begin by examining the end view of the tank (in general so that we can do it for any size cylindrical tank Venting Guide for Aboveground Storage TanksVenting Guide for Aboveground Storage Tanks , sudan vertical cylindrical tank fire volume Horizontal Cylindrical Storage Tank Vertical Cylindrical Storage Tank Horizontal Rectangular Storage Tank Calculation Tables , sudan vertical cylindrical tank fire volume in event of an exposure fire. In a horizontal tank, the wetted area is calculated as 75% of the exposed surface area. In a vertical
### What is the optimum volume of water tank digester?
What is the optimum volume of water tank digester? , sudan vertical cylindrical tank fire volume but a vertical cylindrical tank design is the most common in the UK and USA (Christodoulides, 2001). The tank depth id often equal to its , sudan vertical cylindrical tank fire volumeSPCC Plan - Calculation Guidance - AsmarkSPCC Plan - Calculation Guidance , sudan vertical cylindrical tank fire volume and two 10,000-gallon vertical tanks (each 10 ft diameter and 15 ft height). The dike walls are , sudan vertical cylindrical tank fire volume 1.5 ft 20,000 gallons 36 ft 60 ft Note: The volume displaced by a cylindrical vertical tank is the tank volume within the containment structure and is equal to the tank footprint multiplied by height of the , sudan vertical cylindrical tank fire volumeFire-tube boiler - WikipediaThe general construction is as a tank of water penetrated by tubes that carry the hot flue gases from the fire. The tank is usually cylindrical for the most partbeing the strongest practical shape for a pressurized containerand this cylindrical tank may be either horizontal or vertical.
### Online calculator: Tank Volume Calculators
Tank Volume Calculators. Use one of these to determine your tank's volume. person_outlinePete Mazzschedule 2016-01-16 10:53:10. Cylindrical Tank Volume Use this calculator to determine your cylindrical tank volume in cubic inches and gallons even if one or both ends are rounded. Especially useful if you've cut the tank in length.Chapter 2. Secondary Containment Facilitythrough 2.3 show how to calculate the volumes of horizontal, cylindrical, vertical, and cone-bottom tanks. ()() 2 0.5 21 2 1 2 2 Horizontal cylindrical tank fluid volume (center section of tank): D2hD h 1 L h n D V Di h s 84 D2 Spherical tank fluid volume (end sections of tank): Vh1.5Dh 3 Total tank
## File: phishsigs_howto.tex
package: clamav 0.98.7+dfsg-0+deb6u2
%% LyX 1.5.3 created this file. For more info, see http://www.lyx.org/.
%% Do not edit unless you really know what you are doing.
\documentclass[a4paper,english,10pt]{article}
\usepackage{amssymb}
\usepackage{pslatex}
\usepackage[T1]{fontenc}
\usepackage[dvips]{graphicx}
\usepackage{url}
\usepackage{fancyhdr}
\usepackage{varioref}
\usepackage{prettyref}
\date{}
\begin{document}
\title{{\huge Phishing signatures creation HOWTO}}
\author{T\"or\"ok Edwin}
\maketitle
%TODO: define a LaTeX command, instead of using \textsc{RealURL} each time
\section{Database file format}
\subsection{PDB format}
This file contains urls/hosts that are target of phishing attempts.
It contains lines in the following format: \begin{verbatim} R[Filter]:RealURL:DisplayedURL[:FuncLevelSpec] H[Filter]:DisplayedHostname[:FuncLevelSpec] \end{verbatim} \begin{description} \item [{R}] regular expression, for the concatenated URL \item [{H}] matches the \verb+DisplayedHostname+ as a simple pattern (literally, no regular expression) \begin{itemize} \item the pattern can match either the full hostname \item or a subdomain of the specified hostname \item to avoid false matches in case of subdomain matches, the engine checks that there is a dot(\verb+.+) or a space(\verb+ +) before the matched portion \end{itemize} \item [{Filter}] is ignored for R and H for compatibility reasons \item [{\textsc{RealURL}}] is the URL the user is sent to, example: \emph{href} attribute of an html anchor (\emph{ tag}) \item [{\textsc{DisplayedURL}}] is the URL description displayed to the user, where its \emph{claimed} they are sent, example: contents of an html anchor (\emph{ tag}) \item [{DisplayedHostname}] is the hostname portion of the \textsc{DisplayedURL} \item [{FuncLevelSpec}] an (optional) functionality level, 2 formats are possible: \begin{itemize} \item \verb+minlevel+ all engines having functionality level >= \verb+minlevel+ will load this line \item \verb+minlevel-maxlevel+ engines with functionality level $>=$ \verb+minlevel+, and $<$ \verb+maxlevel+ will load this line \end{itemize} \end{description} \subsection{GDB format} This file contains URL hashes in the following format: \begin{verbatim} S:P:HostPrefix[:FuncLevelSpec] S:F:Sha256hash[:FuncLevelSpec] S1:P:HostPrefix[:FuncLevelSpec] S1:F:Sha256hash[:FuncLevelSpec] S2:P:HostPrefix[:FuncLevelSpec] S2:F:Sha256hash[:FuncLevelSpec] S:W:Sha256hash[:FuncLevelSpec] \end{verbatim} \begin{description} \item [{S:}] These are hashes for Google Safe Browsing - malware sites, and should not be used for other purposes. \item [{S2:}] These are hashes for Google Safe Browsing - phishing sites, and should not be used for other purposes. \item [{S1:}] Hashes for blacklisting phishing sites. Virus name: Phishing.URL.Blacklisted \item [{S:W}] Locally whitelisted hashes. \item [{HostPrefix}] 4-byte prefix of the sha256 hash of the last 2 or 3 components of the hostname. If prefix doesn't match, no further lookups are performed. \item [{Sha256hash}] sha256 hash of the canonicalized URL, or a sha256 hash of its prefix/suffix according to the Google Safe Browsing Performing Lookups'' rules. There should be a corresponding \verb+:P:HostkeyPrefix+ entry for the hash to be taken into consideration. \end{description} To see which hash/URL matched, look at the \verb+clamscan --debug+ output, and look for the following strings: \verb+Looking up hash+, \verb+prefix matched+, and \verb+Hash matched+. Local whitelisting of .gdb entries can be done by creating a local.gdb file, and adding a line \verb+S:W:+. 
\subsection{WDB format} This file contains whitelisted url pairs It contains lines in the following format: \begin{verbatim} X:RealURL:DisplayedURL[:FuncLevelSpec] M:RealHostname:DisplayedHostname[:FuncLevelSpec] \end{verbatim} \begin{description} \item [{X}] regular expression, for the \emph{entire URL}, not just the hostname \begin{itemize} \item The regular expression is by default anchored to start-of-line and end-of-line, as if you have used \verb+^RegularExpression\$+ \item A trailing \verb+/+ is automatically added both to the regex, and the input string to avoid false matches \item The regular expression matches the \emph{concatenation} of the \textsc{RealURL}, a colon(\verb+:+), and the \textsc{DisplayedURL} as a single string. It doesn't separately match \textsc{RealURL} and \textsc{DisplayedURL}! \end{itemize} \item [{M}] matches hostname, or subdomain of it, see notes for {H} above \end{description} \subsection{Hints} \begin{itemize} \item empty lines are ignored \item the colons are mandatory \item Don't leave extra spaces on the end of a line! \item if any of the lines don't conform to this format, clamav will abort with a Malformed Database Error \item see section \vref{sub:Extraction-of-realURL,} for more details on \textsc{realURL/displayedURL} \end{itemize} \subsection{Examples of PDB signatures} To check for phishing mails that target amazon.com, or subdomains of amazon.com: \begin{verbatim} H:amazon.com \end{verbatim} To do the same, but for amazon.co.uk: \begin{verbatim} H:amazon.co.uk \end{verbatim} To limit the signatures to certain engine versions: \begin{verbatim} H:amazon.co.uk:20-30 H:amazon.co.uk:20- H:amazon.co.uk:0-20 \end{verbatim} First line: engine versions 20, 21, ..., 29 can load it Second line: engine versions >= 20 can load it Third line: engine versions < 20 can load it In a real situation, you'd probably use the second form. A situation like that would be if you are using a feature of the signatures not available in earlier versions, or if earlier versions have bugs with your signature. Its neither case here, the above examples are for illustrative purposes only. \subsection{Examples of WDB signatures} To allow amazon's country specific domains and amazon.com, to mix domain names in \textsc{DisplayedURL}, and \textsc{RealURL}: \begin{verbatim} X:.+\.amazon\.(at|ca|co\.uk|co\.jp|de|fr)([/?].*)?:.+\.amazon\.com([/?].*)?:17- \end{verbatim} Explanation of this signature: \begin{description} \item [{X:}] this is a regular expression \item [{:17-}] load signature only for engines with functionality level >= 17 (recommended for type X) \end{description} The regular expression is the following (X:, :17- stripped, and a / appended) \begin{verbatim} .+\.amazon\.(at|ca|co\.uk|co\.jp|de|fr)([/?].*)?:.+\.amazon\.com([/?].*)?/ \end{verbatim} Explanation of this regular expression (note that it is a single regular expression, and not 2 regular expressions splitted at the {:}). 
\begin{itemize}
\item \verb;.+; any subdomain of
\item \verb;\.amazon\.; the domain we are whitelisting (\textsc{RealURL} part)
\item \verb;(at|ca|co\.uk|co\.jp|de|fr); country-domains: at, ca, co.uk, co.jp, de, fr
\item \verb;([/?].*)?; recommended way to end the real URL part of the whitelist; this protects against embedded URLs (evilurl.example.com/amazon.co.uk/)
\item \verb;:; \textsc{RealURL} and \textsc{DisplayedURL} are concatenated via a {:}, so match a literal {:} here
\item \verb;.+; any subdomain of
\item \verb;\.amazon\.com; the whitelisted \textsc{DisplayedURL}
\item \verb;([/?].*)?; recommended way to end the displayed URL part, to protect against embedded URLs
\item \verb;/; automatically added to further protect against embedded URLs
\end{itemize}
When you whitelist an entry, make sure you check that both domains are owned by the same entity. What this whitelist entry allows is: links claiming to point to amazon.com (the \textsc{DisplayedURL}) that really go to a country-specific domain of amazon (the \textsc{RealURL}).

\subsection{Example for how the URL extractor works}
Consider the following HTML file: \begin{verbatim} 1.displayedurl.example.com 2 di
A Delta 727 traveled 2520 miles with the wind in 4.5 hours and 2160 miles against the wind in the same amount of time. How do you find the speed of the plane in still air and the speed of the wind?
Sep 26, 2015
Plane: $520$ mph in still air; wind: $40$ mph
Explanation:
Let the speed of the plane in still air be $x$ mph and the speed of the wind be $y$ mph.
WITH THE WIND
$\text{Speed} = x + y = \frac{2520}{4.5} = 560 \quad \ldots (A)$
AGAINST THE WIND
$\text{Speed} = x - y = \frac{2160}{4.5} = 480 \quad \ldots (B)$
Adding $(A)$ and $(B)$:
$2x = 1040$
$x = 520$ mph (the speed of the plane in still air)
Substituting into $(A)$: $y = 560 - 520 = 40$ mph (the speed of the wind) ANSWER
CHECK
$\frac{2520}{560} = 4.5$ hours
$\frac{2160}{480} = 4.5$ hours
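For completeness, a quick numeric re-check of the same two equations in Python (an illustrative sketch, not part of the original answer):

    with_wind = 2520 / 4.5       # x + y = 560 mph (ground speed with the wind)
    against_wind = 2160 / 4.5    # x - y = 480 mph (ground speed against the wind)
    x = (with_wind + against_wind) / 2   # speed of the plane in still air
    y = (with_wind - against_wind) / 2   # speed of the wind
    print(x, y)                          # 520.0 40.0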
Is it true that Maxwell's equations are to be interpreted by taking the right side of each formula as the "origin" and the left side as the "consequence"?
When books or other references interpret the meaning of Maxwell's equations, they typically state that the source (the origin of the phenomenon) is the right-hand side of the formula, and the resulting effect is the left-hand side.
For example, for the Maxwell-Faraday law, $$\vec{\nabla} \times \vec{E}=-\frac{\partial \vec{B}}{\partial t}$$ one states
"a time varying magnetic field creates ("induces") an electric field." (see for example : https://en.wikipedia.org/wiki/Maxwell%27s_equations#Faraday's_law )
It seems to me that this is not true; one could interpret it in both directions.
For the example above, we could equally state that a spatial variation (curl) of the electric field creates a temporal change of the magnetic field.
Is it true that Maxwell's equations should be interpreted by taking the right side of a formula as the "origin" and the left side as the "consequence"? Or could we also take the left side as the origin?
You are basically correct but I think I can elucidate existing answers by pointing out that there are two issues here: a physical one and a mathematical one.
Maxwell's equations are making both mathematical and physical statements. The relationship between the left hand side and the right hand side is not a cause-effect relationship. But when we use the equations to find out how the field at any given place comes about, then we do find a cause-effect relationship: the field at any given place can be expressed in terms of the charge density and current on the surface of the past light cone of that event.
Mathematically the Maxwell equations have the form of differential equations. Looking at the first one, we would normally regard it as telling us something about the electric field if the charge density is given, but you can equally well regard it as telling us the charge density if the electric field is given. The difference between these two perspectives is that the second is mathematically not challenging and does not require any great analysis: if $$\bf E$$ is known then to find $$\rho$$ you just do some differentiation and a multiplication by a constant: all fairly simple. But if you have a known charge density and want to find the electric field, you have a lot more work to do, and indeed the problem cannot be solved at all unless you know quite a lot: to get the electric field at one point you need to know the charge density and current on the entire past light cone. Since this calculation is harder it earns some mathematical respect and there is terminology associated with it. We say (mathematically speaking) that $$\rho$$ is a 'source term' in a differential equation for $$\bf E$$. This is somewhat reminiscent of cause and effect but strictly speaking it is only indirectly related to cause and effect as I already said.
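To make the harder direction concrete (a standard result, added here only for reference): in the Lorenz gauge the scalar potential is the retarded integral
$$\varphi(\vec{r}, t) = \frac{1}{4\pi\epsilon_0}\int \frac{\rho\big(\vec{r}\,',\; t - |\vec{r}-\vec{r}\,'|/c\big)}{|\vec{r}-\vec{r}\,'|}\, d^3r'$$
which samples the charge density on the past light cone of the field point, exactly as described above.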
• "the field at any given place can be expressed in terms of the charge density and current on the surface of the past light cone of that event." Maybe I am missing something, but what about free electromagnetic field? Or do you assume something like Sommerfeld radiation condition? Aug 20 '21 at 8:15
• @akhmeteli thanks: a helpful comment. I don't think one can tell what boundary condition at past infinity ought to be taken, but you are correct to point out that if there are already propagating fields in the distant past then one might have simply to take that as part of a boundary condition. Aug 27 '21 at 16:34
Maxwell's equations are, in some sense, better understood by looking at them as two sets of equations. The dynamical variables in classical electrodynamics are the fields. We can thus group the equations such that there are two homogeneous PDEs for the dynamical variables (the fields). These are:
$$\nabla \times \vec{E} + \frac{\partial \vec{B}}{\partial t} = 0 $$
$$\nabla \cdot \vec{B} = 0$$
The above equations are generally valid.
Additionally we have the following two equations:
$$\nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0}$$
$$\nabla \times \vec{B} = \mu_0\left(\vec{J} + \epsilon_0\frac{\partial \vec{E}}{\partial t}\right)$$
In these equations, the right-hand sides are normally said to contain the 'source' terms. Why? The presence of a (static) charge configuration generates an electric field. The presence of a current density or a time-varying electric field can generate a magnetic field. In that sense, the terms on the right generate the fields and their dynamics.
Of course, in the context of boundary value problems and so on, one can have electric fields in a charge-free region. But the idea is that there are fundamental physical objects, such as charges and currents, which generate the dynamical variables of electrodynamics.
Interestingly, in a relativistically covariant formulation of electrodynamics, the four Maxwell equations reduce to two tensor equations: one containing the first two (homogeneous) equations, and the other containing the second two equations with the sources.
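For reference, in one common convention the two tensor equations referred to above read
$$\partial_{[\alpha} F_{\beta\gamma]} = 0, \qquad \partial_{\mu} F^{\mu\nu} = \mu_0 J^{\nu},$$
where the first (homogeneous) identity reproduces the two source-free equations and the second contains the sources.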
• In the "geometric algebra" formalism, they reduce to a single, extremely elegant equation: youtube.com/watch?v=60z_hpEAtD8 Aug 19 '21 at 19:15
• @BlueRaja-DannyPflughoeft But an elegant form can also be achieved with four-vectors, the EM tensor and the Hodge dual: dF=0, d*F=J. en.wikipedia.org/wiki/… Aug 21 '21 at 11:39
You are correct. Maxwell-Faraday states that $$\vec{\nabla} \times \vec{E}$$ is the same thing as $$-\frac{\partial \vec{B}}{\partial t}$$. Both quantities express the same phenomenon. There is no cause and effect in either direction.
Once you express the fields in terms of derivatives of the vector potential, the two expressions become identical. As the Coulomb potential is rotation-free, we can set it to zero for this exercise. Then $$\vec{E} = -\partial_t \vec{A}$$ and, since $$\vec{B} = \vec{\nabla} \times \vec{A}$$, both sides of the Maxwell-Faraday equation are seen to be identical. In this sense the Maxwell-Faraday equation expresses the fact that the fundamental quantity of electromagnetism is the vector potential.
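Explicitly, with these definitions (spelled out for clarity):
$$\vec{\nabla}\times\vec{E} = -\vec{\nabla}\times\partial_t \vec{A} = -\partial_t\big(\vec{\nabla}\times\vec{A}\big) = -\frac{\partial \vec{B}}{\partial t}$$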
Is it true that Maxwell's equations should be interpreted by taking the right side of a formula as the "origin" and the left side as the "consequence"?
No. If you want to describe something as a consequence then it must happen later than the thing that it is a consequence of. This relationship is usually called cause and effect or causality. Maxwell’s equations do not express a cause and effect relationship.
In electromagnetism, the equations that describe the causes of the electromagnetic fields are Jefimenko's equations: https://en.m.wikipedia.org/wiki/Jefimenko's_equations
Note that these equations do describe a true cause-and-effect relationship, since the right-hand side is evaluated at the so-called retarded time, which is earlier than the time on the left-hand side. The causes of the fields are the charges and currents, not the other fields.
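For reference, in one common form Jefimenko's expression for the electric field reads
$$\vec{E}(\vec{r},t) = \frac{1}{4\pi\epsilon_0}\int\left[\frac{\rho(\vec{r}\,',t_r)}{R^2}\hat{R} + \frac{\partial_t\rho(\vec{r}\,',t_r)}{cR}\hat{R} - \frac{\partial_t\vec{J}(\vec{r}\,',t_r)}{c^2 R}\right]d^3r', \qquad t_r = t - \frac{R}{c},$$
with $$R = |\vec{r}-\vec{r}\,'|$$: every source term on the right is evaluated at the retarded time $$t_r$$, which is why the charges and currents can be read as the causes of the fields.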
• I believe the reason we think of cause-action relationships as requiring the action to happen after its cause is that we perceive time as running forward. I don't see why relativistic cause-action relationships of our physical universe, such as a magnetic field corresponding to an electric field, should also obey this rule, as time plays a more fundamental role here, possibly being orthogonal to our perception of „before“ and „after“. Therefore I wouldn't take „No“ as a general answer to the question. Aug 19 '21 at 23:02
• @returntrue Maxwell’s equations are fully relativistic, so my answer applies perfectly fine considering all relativistic effects
– Dale
Aug 20 '21 at 0:00
• I tried to say that a cause-action effect does not necessarily need the action to occur after the cause, especially in a relativistic setting. So my point stands. I only object to your answer's first two sentences. Aug 20 '21 at 8:46
• @returntrue it definitely does. In fact, in relativity the requirement is stronger, not weaker. In relativity the effect must occur in the future light cone of the cause, which is not only after the cause, but also close enough to the cause that light or something slower than light could travel from the cause to the effect. The first two sentences hold in relativity.
– Dale
Aug 20 '21 at 11:14
• That makes sense! Aug 20 '21 at 15:49
Although the answer to your main question is no, there is a causal relationship contained within Maxwell's equations, through the time-derivative terms. For example, you can find the 'effect' $$\vec{B}(t+dt)$$ in the future from the 'causes' in the present, $$\vec{B}(t)$$ and $$\vec{\nabla} \times \vec{E}(t)$$, via:
$$\vec{B}(t+dt) = \vec{B}(t) - dt\,\vec{\nabla} \times \vec{E}(t)$$
There are other ways of modelling causality in classical electrodynamics, such as via the Liénard-Wiechert fields of a moving charge or through Jefimenko's equations; but nevertheless Maxwell's equations are causal if you look in the right place.
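As a schematic numerical illustration of the explicit update above (a sketch with arbitrary units, $$c = 1$$ and a made-up initial field profile; not a full FDTD scheme):

    import numpy as np

    # 1-D geometry: E = (0, 0, Ez(x)), B = (0, By(x), 0), so (curl E)_y = -dEz/dx
    x = np.linspace(0.0, 1.0, 200)
    dt = 1e-3
    Ez = np.exp(-((x - 0.5) / 0.05) ** 2)   # arbitrary initial electric field
    By = np.zeros_like(x)

    curlE_y = -np.gradient(Ez, x)           # y-component of curl E
    By = By - dt * curlE_y                  # B at t + dt from B and curl E at t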