Dataset schema: `prompt` (string, 12 to 7.84k characters), `context` (empty), `response` (list), `response-suggestion` (string, 15 to 17.7k characters), `response-suggestion-metadata` (dict), `external_id` (empty), `metadata` (string, 113 to 129 characters). Each record below pairs a prompt with its suggested response.

---

**Prompt:**
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
I don't understand the part marked in bold.
Congress shall make no law prohibiting the free exercise of religion. So congress should not make a law which prohibits the freedom of religion. I get it.
But Congress shall make a law which respects an establishment of religion. Doesn't "Congress shall make no law respecting an establishment of religion" mean Congress should not make a law that respects religion, because there's "no" in it?

**Response:**

The confusing term seems to be "respecting". This is a somewhat different meaning of "respect" that is still in common use:
respecting (prep):
* in view of : considering
* with respect to : concerning
The first amendment states that Congress shall pass no law related to (or with regard to) the establishment of a (state) religion. This has been interpreted by the courts to have many consequent effects, mainly that no governing body in the country can write law supporting any one religion over another religion.
The term "respecting" is mostly limited to legal documents or formal essays/articles. Examples:
Respecting the salaries of the governors of states, the constitution made no provision.
The speculations of the fathers respecting the origin and course of the world seek to combine Christian ideas of the Deity with doctrines of Greek philosophy.
Authentic information respecting the great valley of the Ganges was supplied by Megasthenes, an ambassador sent by Seleucus, who reached the remote city of Patali-putra, the modern Patna.
"Respecting" can be used to mean "having/showing respect for", but most commonly in the related term "self-respecting"
No self-respecting gentleman would go outside on a sunny English day without an umbrella.
In most cases, if you want to say "having respect for" use that, the verb "to respect", or the adjective "respectful (of)".
He was respectful of her concerns, and did not press her for an immediate decision.
Nor did the clergy much respect the official standards of the Church.
They had little respect for tradition, flouting contemporary standards to produce art intended to shock the viewer.
There is also the related term "respective":
respective (adj): belonging or relating to each one of the people or things that have been mentioned, not the same or shared : separate
This is commonly used to talk about things that are similar but separately related to some other similar set of things:
As the sun set, each of the children hurried to their respective homes.
The chronic controversies between the courts of common law and the Admiralty Court as to the limits of their respective jurisdictions reached an acute stage.
As Flater's comment mentions, the term "irrespective (of)" implies the opposite of "respecting": not taking (something) into account; regardless of.
Irrespective of their political differences, the candidates agreed that the economy was the most important issue facing the country.

---

**Prompt:**
Why are people unwilling to share information about their salary?

**Response:**

People are unwilling to talk openly about their salaries because there is a social taboo surrounding the practice.
Like all taboos, there isn't necessarily a good rationale behind its existence. And the taboo can, in fact, be detrimental, as it is here.
Around the time of the industrial revolution, you were not allowed to talk about your wages. You could be fired if you did. Employers did this because discussing what you make is an obvious and required step towards collective bargaining.
If you discuss your wages, someone will find out they make less money than they should. If they make less money than they should, they will want more money. The people making money from underpaying people don't want to pay more. Your ignorance is a very useful tool in depressing your wages or the wages of others -- especially minorities or women -- around you.
So people learned from the early industrialists that they would be fired for talking about money. They told others what had happened to them, and the lesson snowballed from there. So now, even though discussing your salary is a legally protected right, people are hesitant to exercise that right.
Not discussing salary is an abusive practice. It developed as a historical artefact from the abuse of the rich and powerful at the beginning of the industrial revolution, has wormed its way deep into society, and is still pushed by the same abusers today.

---

**Prompt:**
In D&D, how many attacks can a fighter with two-weapon fighting make at 5th level? | null | [] | Answer: 3 attacks on your turn (if you don't use Action Surge). (It could go up to five with Action Surge, though.)
The Two-Weapon Fighting rule states (PHB, page 195):
>When you take the Attack action and attack with a light melee weapon that you’re holding in one hand, you can use a bonus action to attack with a different light melee weapon that you’re holding in the other hand. You don’t add your ability modifier to the damage of the bonus attack, unless that modifier is negative. If either weapon has the thrown property, you can throw the weapon, instead of making a melee attack with it.
The bonus action used as part of Two-Weapon Fighting is an attack, but it is not the Attack action.
When you are a 5th-level fighter and you take the Attack action, you can make two attacks. You can also use your bonus action to make an additional attack. Notably, you can only take one bonus action per turn, since the rules for bonus actions state:
> You can take only one bonus action on your turn, so you must choose which bonus action to use when you have more than one available.
You only add your ability modifier to the damage of that bonus-action attack if you have the Two-Weapon Fighting fighting style (PHB, page 72):
> When you engage in two-weapon fighting, you can add your ability modifier to the damage of the second attack.
Therefore, you are taking the Attack action (2 attacks) and then making an additional attack with your bonus action (1 attack), for a total of 3 attacks if you're using your action plus your bonus action. There's a clear difference between them.
Action Surge lets a 5th-level fighter attack up to 5 times in one turn, once per short rest.
The description of the Action Surge feature says (PHB, page 72):
> Starting at 2nd level, you can push yourself beyond your normal limits for a moment. On your turn, you can take one additional action. Once you use this feature, you must finish a short or long rest before you can use it again. Starting at 17th level, you can use it twice before a rest, but only once on the same turn.
Which means that if you use Action Surge, you would have 2 actions and 1 bonus action on your turn.
So with this, you can take your Attack action (2 attacks), then use Action Surge and take another Attack action (2 more attacks), and finally your Two-Weapon Fighting bonus action (one more attack), for a total of 5 attacks during your turn.

---

**Prompt:**
Consider the following code:
```
import numpy as np

avgDists = np.array([1, 8, 6, 9, 4])
ids = avgDists.argsort()[:n]
```
This gives me the indices of the ```n``` smallest elements. Is it possible to use this same ```argsort``` in descending order to get the indices of the ```n``` highest elements?

**Response:**

If you negate an array, the lowest elements become the highest elements and vice-versa. Therefore, the indices of the `n` highest elements are:
```
(-avgDists).argsort()[:n]
```
Another way to reason about this, as mentioned in the comments, is to observe that the big elements are coming last in the argsort. So, you can read from the tail of the argsort to find the `n` highest elements:
```
avgDists.argsort()[::-1][:n]
```
Both methods are O(n log n) in time complexity, because the argsort call is the dominant term here. But the second approach has a nice advantage: it replaces an O(n) negation of the array with an O(1) slice. If you're working with small arrays inside loops then you may get some performance gains from avoiding that negation, and if you're working with huge arrays then you can save on memory usage because the negation creates a copy of the entire array.
Note that these methods do not always give equivalent results: if a stable sort implementation is requested to argsort, e.g. by passing the keyword argument kind='mergesort', then the first strategy will preserve the sorting stability, but the second strategy will break stability (i.e. the positions of equal items will get reversed).
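A runnable sketch of both approaches (NumPy assumed), including the stability caveat with tied values:
```
import numpy as np

avgDists = np.array([1, 8, 6, 9, 4])
n = 3

print((-avgDists).argsort()[:n])     # [3 1 2] -- indices of the 3 largest values
print(avgDists.argsort()[::-1][:n])  # [3 1 2] -- same result here, all values unique

# With repeated values and a stable sort, the two methods order ties differently:
ties = np.array([5, 5, 3])
print((-ties).argsort(kind='mergesort'))     # [0 1 2] -- ties keep their original order
print(ties.argsort(kind='mergesort')[::-1])  # [1 0 2] -- the reversal flips the ties
```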
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 233, "passed_quality_check": "False", "flesch_reading_ease": 67.37285714285714, "entropy": 1.5339227822747477} |
Can a woman give birth to twins with different fathers?

**Response:**

Yes, this is possible through something called heteropaternal superfecundation (see below for further explanation).
Of all twin births, 30% are identical and 70% are non-identical (fraternal) twins.
Identical twins result when a zygote (one egg, or ovum, fertilized by one sperm) splits at an early stage to become twins. Because the genetic material is essentially the same, they resemble each other closely.
Typically during ovulation only one ovum is released to be fertilized by one sperm. However, sometimes a woman's ovaries release two ova. Each must be fertilized by a separate sperm cell. If she has intercourse with two different men, the two ova can be fertilized by sperm from different sexual partners. The term for this event is heteropaternal superfecundation (HS): twins who have the same mother, but two different fathers.
This has been proven in paternity suits (in which there will be a bias selecting for possible infidelity) involving fraternal twins, where genetic testing must be done on each child. The frequency of heteropaternal superfecundation in this group was found (in one study) to be 2.4%. As the study's authors state, "Inferences about the frequency of HS in other populations should be drawn with caution."

---

**Prompt:**
Relationship between SVD and PCA. How to use SVD to perform PCA?

**Response:**

Let the real-valued data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered, i.e. column means have been subtracted and are now equal to zero.
Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda_i$ in the decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data on the principal axes are called principal components, also known as PC scores; these can be seen as new, transformed, variables. The $j$-th principal component is given by $j$-th column of $\mathbf {XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\mathbf{XV}$.
If we now perform singular value decomposition of $\mathbf X$, we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix (with columns called left singular vectors), $\mathbf S$ is the diagonal matrix of singular values $s_i$ and $\mathbf V$ columns are called right singular vectors. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that right singular vectors $\mathbf V$ are principal directions (eigenvectors) and that singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$. Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$.
To summarize:
1. If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$, then the columns of $\mathbf V$ are principal directions/axes (eigenvectors).
2. Columns of $\mathbf {US}$ are principal components ("scores").
3. Singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$. Eigenvalues $\lambda_i$ show variances of the respective PCs.
4. Standardized scores are given by columns of $\sqrt{n-1}\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$. See e.g. here and here for why "loadings" should not be confused with principal directions.
5. The above is correct only if $\mathbf X$ is centered. Only then is the covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$.
6. The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations.
7. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations.
8. To reduce the dimensionality of the data from $p$ to $k<p$, select $k$ first columns of $\mathbf U$, and $k\times k$ upper-left part of $\mathbf S$. Their product $\mathbf U_k \mathbf S_k$ is the required $n \times k$ matrix containing first $k$ PCs.
9. Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V_k^\top$ yields $\mathbf X_k = \mathbf U_k^\vphantom \top \mathbf S_k^\vphantom \top \mathbf V_k^\top$ matrix that has the original $n \times p$ size but is of lower rank (of rank $k$). This matrix $\mathbf X_k$ provides a reconstruction of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, see my answer here.
10. Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and corresponding rows of $\mathbf S$ are constant zero); one should therefore use an economy size (or thin) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies for an opposite situation of $n\ll p$.
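A minimal numerical check of these relations (NumPy assumed; the data are random and purely illustrative):
```
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)              # center the columns (point 5)

# PCA via eigendecomposition of the covariance matrix
C = X.T @ X / (n - 1)
eigvals, V_eig = np.linalg.eigh(C)  # eigh returns eigenvalues in ascending order
eigvals = eigvals[::-1]             # reorder to descending, to match the SVD
# (eigenvectors match V only up to sign, so we compare eigenvalues and scores)

# PCA via economy-size SVD (point 10)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

print(np.allclose(eigvals, s**2 / (n - 1)))  # lambda_i = s_i^2 / (n - 1)   (point 3)
print(np.allclose(X @ Vt.T, U * s))          # principal components XV = US (point 2)
```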
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 53, "passed_quality_check": "True", "flesch_reading_ease": 87.94500000000002, "entropy": 0.41686633549946683} |
What is the English term for when someone thinks they are doing something nice for you but it ends up making things worse? For example: someone buys you an elephant -- nice gesture and cool! But now you have to take care of it, and it becomes a burden on you.

**Response:**

In fact, the English expression for a burdensome gift is literally white elephant:
> a thing that is useless and no longer needed, although it may have cost a lot of money [OALD]
So-called white elephants, or albino elephants, are found in many parts of South and Southeast Asia. In Buddhist countries they may be venerated, as Queen Maya, mother of the Buddha, was said to have been visited in a dream by a white elephant holding a white lotus flower, and Siddhartha Gautama entered his mother's womb in the form of a white elephant. The white elephant is also associated with traits like mental strength and purity.
It became a royal symbol in Siam (Thailand); the king continues to keep white elephants. The story emerged that if a courtier displeased him, the king would make him a gift of a white elephant. The courtier could hardly decline a royal gift, and could hardly afford not to maintain a sacred animal, and could not put it to productive use, and so would be ruined by the cost of upkeep.
The earliest example of its use is from a 1721 essay in London Journal:
In short, Honour and Victory are generally no more than white Elephants; and for white Elephants the most destructive Wars have been often made.
A 2011 paper by Ross Bullen entitled “This Alarming Generosity”: White Elephants and the Logic of the Gift, in American Literature, covers the popularization of the term in the mid-19th century and presents an alternative account: that the story is a piece of orientalism and that the white elephant arose as a literary trope.

---

**Prompt:**
Did Hillary Clinton propose to punish flag burners in 2005?

**Response:**

The proposed law co-sponsored by Clinton in 2005 prohibits flag burning under specific circumstances. It does not attempt to make flag burning illegal in general (i.e. outside of those specific circumstances). Clinton has voted against a proposed amendment to the Constitution that would allow Congress to ban flag burning in general.
In 2005, Clinton co-sponsored the Flag Protection Act of 2005, whose summary is as follows:
> Flag Protection Act of 2005 - Amends the federal criminal code to revise provisions regarding desecration of the flag to prohibit: (1) destroying or damaging a U.S. flag with the primary purpose and intent to incite or produce imminent violence or a breach of the peace; (2) intentionally threatening or intimidating any person, or group of persons, by burning a U.S. flag; or (3) stealing or knowingly converting the use of a U.S. flag belonging to the United States, or belonging to another person on U.S. lands, and intentionally destroying or damaging that flag.
It seems like a reaction to ongoing efforts to pass a constitutional amendment allowing Congress to ban flag burning, a move that the co-sponsors of the 2005 act oppose and voted against. Its text notes:
> the Bill of Rights is a guarantee of those freedoms and should not be amended in a manner that could be interpreted to restrict freedom, a course that is regularly resorted to by authoritarian governments which fear freedom and not by free and democratic nations
but suggests that flag burning with intent to incite violence is not protected by the Constitution (unlike flag burning as political expression, which is protected):
> destruction of the flag of the United States can be intended to incite a violent response rather than make a political statement and such conduct is outside the protections afforded by the first amendment to the Constitution.
Note that the distinction between
* banning flag burning for being offensive, and
* banning flag burning when it incites violence or disturbs the peace
is an important one. The Flag Protection Act proposed in 2005 was essentially a replacement for a previous Flag Protection Act that banned flag burning in general:
> Whoever knowingly mutilates, defaces, physically defiles, burns, maintains on the floor or ground, or tramples upon any flag of the United States shall be fined under this title or imprisoned for not more than one year, or both.
but this was deemed unconstitutional in United States v. Eichman, 496 U.S. 310 in 1990, on the basis of their earlier (1989) decision in Texas v. Johnson, 491 U.S. 397. There, the Supreme Court noted that
> an important governmental interest in regulating nonspeech can justify incidental limitations on First Amendment freedoms when speech and nonspeech elements are combined in the same course of conduct
but that in the case under consideration, this did not apply:
> An interest in preventing breaches of the peace is not implicated on this record.
and further noted that
> Expression may not be prohibited [p398] on the basis that an audience that takes serious offense to the expression may disturb the peace, since the Government cannot assume that every expression of a provocative idea will incite a riot, but must look to the actual circumstances surrounding the expression.
Clinton was not one of the 59 co-sponsors of a proposed amendment to the constitution that would allow Congress to ban flag burning in general:
> Proposing an amendment to the Constitution of the United States authorizing Congress to prohibit the physical desecration of the flag of the United States.
and she also voted against it. The proposed amendment failed to pass by one vote.

---

**Prompt:**
There are several questions here about the definition of depth of field, about focal length, and about subject distance. And of course there's the basic how does aperture affect my photographs. And there are plenty of how do I get super-shallow d.o.f. questions. There are related questions like this one. But there's no be-all-end-all question asking:
What exactly determines depth of field in a photograph?
Is it just a property of the lens? Can lenses be designed to give more depth of field for the same aperture and focal length? Does it change with camera sensor size? Does it change with print size? How do those last two relate?

**Response:**

I'm going to dispense with the formulas, photos of rulers and definitions of "magnification" and go with what you actually experience in practice. The major factors that actually matter to shooting are:
* Aperture. Wide aperture lenses give you a shallower depth of field. This is probably the least controversial factor! This is important as some lenses have much larger apertures e.g. 18-55 f/3.5-5.6 vs. 50 f/1.8
* Subject distance. This is a really important consideration. Depth of field gets drastically shallower when you start to get really close. This is important as at macro focussing distances DoF is a major problem. It also means you can get shallow DoF regardless of aperture if you get close enough, and that if you want deep DoF in low light compose to focus further away.
* Focal length. This does affect depth of field, but only in certain ranges, when maintaining subject size. Wide lenses have very deep depth of field at most subject distances. Once you get past a certain point, DoF changes very little with focal length. This is important again because if you want to increase / decrease DoF you can use focal length to do this whilst still filling the frame with your subject.
* Sensor size. This affects DoF when you maintain the same subject distance and field of view between sensor sizes. The bigger the sensor the shallower the depth of field. DSLRs have much bigger sensors than compacts, and so for the same FoV and f-ratio they have shallower DoF. This is important because by the same token cropping images increases DoF when maintaining the same final output size, as it's akin to using a smaller sensor.
---

**Prompt:**
Why did the NES not allow rotated sprites?

**Response:**

For each sprite displayed on a scanline, the hardware fetches two bytes from memory, and then clocks the pixels out one by one. The sprite is eight pixels wide, and each pixel is two bits, which is why it's two memory accesses per sprite per scanline. You can imagine that this arrangement just needs a couple of shift registers to clock the pixels out.
Now, flipping the sprite about its vertical axis is easy: you just clock the pixels out in reverse order! Similarly, flipping the sprite about its horizontal axis is also easy: You just fetch the bitmap from memory in reverse order. Rotating by 180° of course is the same as flipping both horizontally and vertically.
But if you wanted to rotate the sprite 90° or 270°, that's much harder. That's because the hardware would need to get one pixel from each pair of bytes, which means fetching 16 bytes from memory instead of just two. There is not enough time on this slow hardware to do that. Incidentally, this is also where the maximum number of sprites per scanline limitation comes from.
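An illustrative sketch of the asymmetry (Python, using a single 1-bit-per-pixel 8x8 plane for simplicity; real NES sprites combine two such planes for their 2-bit pixels):
```
# A sprite plane: 8 rows, each stored as one byte, MSB = leftmost pixel.

def hflip(rows):
    """Mirror about the vertical axis: reverse the bit order within each
    row byte, i.e. just clock the pixels out in reverse."""
    return [int(format(r, '08b')[::-1], 2) for r in rows]

def vflip(rows):
    """Mirror about the horizontal axis: just fetch the row bytes in reverse order."""
    return rows[::-1]

def rot90(rows):
    """Rotate 90 degrees clockwise: each OUTPUT byte needs one bit from EVERY
    input byte, so the whole sprite must be fetched and reassembled rather
    than streamed out a row at a time."""
    out = []
    for c in range(8):
        bits = [(rows[7 - r] >> (7 - c)) & 1 for r in range(8)]
        out.append(int(''.join(map(str, bits)), 2))
    return out
```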
It's a similar story on the Commodore 64, the Atari 2600, and many others: These platforms can do some simple 2D manipulations on sprites like stretching and sometimes flipping like you've seen, but never rotations.

---

**Prompt:**
I want to merge two dictionaries into a new dictionary.
```
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
z = merge(x, y)
>>> z
{'a': 1, 'b': 3, 'c': 4}
```

**Response:**

Here is what you can do for the case you described:
```
z = dict(list(x.items()) + list(y.items()))
```
This will, as you want it, put the final dict in ```z```, and make the value for key ```b``` be properly overridden by the second (```y```) dict's value:
```>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = dict(list(x.items()) + list(y.items()))
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
If you use Python 2, you can even remove the ```list()``` calls. To create z:
```>>> z = dict(x.items() + y.items())
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
If you use Python version 3.9.0a4 or greater, you can directly use:
```>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = x | y
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
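One more option worth noting: since Python 3.5 (PEP 448) you can also use dictionary unpacking; the output shown assumes Python 3.7+ insertion-ordered dicts:
```
>>> z = {**x, **y}
>>> z
{'a': 1, 'b': 10, 'c': 11}
```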
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 134, "passed_quality_check": "False", "flesch_reading_ease": 87.58250000000002, "entropy": 1.5202252307969666} |
There are many "standards" for the JSON content type:
application/json
application/x-javascript
text/javascript
text/x-javascript
text/x-json
Which one do I use, and where? I assume security and browser support issues are a factor.

**Response:**

According to RFC 4627:
> The MIME media type for JSON text is application/json. The default encoding is UTF-8.
For JSON text, you should use:
```
application/json
```
For JSONP (runnable JavaScript) with callback, you should use:
```
application/javascript
```
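As a minimal sketch of sending that header in practice (Python standard library; the handler name and payload are made-up illustrations):
```
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JSONHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # UTF-8 is the default encoding for application/json per RFC 4627
        body = json.dumps({"ok": True}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8000), JSONHandler).serve_forever()
```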
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 207, "passed_quality_check": "False", "flesch_reading_ease": 69.31750000000001, "entropy": 1.1771266428791443} |
What's the goal of Minecraft and what can users achieve in this world?

**Response:**

The thing about Minecraft is that, unlike most games, you set your own goals. Here's a list of common (and uncommon!) things people have done with Minecraft:
* Building a massive structure
* Recreating real-world objects
* Survival (Deleting map upon dying)
* Making amusing videos
* Creating an adventure map for others to play
* Getting all the achievements
* Proposing to your girlfriend
* Achieving difficult things in the game through ingenuity
* Computation
* Blowing stuff up!
The mechanism that keeps the game going is your own desire to complete the goals you've set yourself, and pure, red-hot addiction.
Some additional stuff people have done:
* Walking and walking and walking (thanks SevenSidedDie)
* Build an epic railway
* Make music
* Find the Far Lands
Minecraft 1.9+
In Minecraft 1.9 and beyond, there are villages, strongholds, abandoned mineshafts and nether ruins to explore. In particular, strongholds will contain portals to The End, in which players can fight The Ender Dragon, a 'boss fight'. A second 'boss fight', The Wither, has also been added. This makes exploration a key part of the game, as well as combat. 'Hardcore Mode' (world deletion upon death) is also an option if you're after a challenge.
A creative mode has also been added that allows players complete freedom in constructing large structures, compartmentalizing that aspect of gameplay.

---

**Prompt:**
If I have sampled a signal using proper sampling methods (Nyquist, filtering, etc) how do I relate the length of my FFT to the resulting frequency resolution I can obtain?
Like if I have a 2,000 Hz and 1,999 Hz sine wave, how would I determine the length of FFT needed to accurately tell the difference between those two waves?

**Response:**

The frequency resolution is dependent on the relationship between the FFT length and the sampling rate of the input signal.
If we collect 8192 samples for the FFT then we will have:
$$\frac{8192\ \text{samples}}{2} = 4096\ \,\text{FFT bins}$$
If our sampling rate is 10 kHz, then the Nyquist-Shannon sampling theorem says that our signal can contain frequency content up to 5 kHz. Then, our frequency bin resolution is:
$$\frac{5\ \text{kHz}}{4096\ \,\text{FFT bins}} \simeq \frac{1.22\ \text{Hz}}{\text{bin}}$$
This may be the easier way to explain it conceptually, but simplified: your bin resolution is just $\frac{f_{samp}}{N}$, where $f_{samp}$ is the input signal's sampling rate and $N$ is the number of FFT points used (sample length).
We can see from the above that to get smaller FFT bins we can either run a longer FFT (that is, take more samples at the same rate before running the FFT) or decrease our sampling rate.
## The Catch:
There is always a trade-off between temporal resolution and frequency resolution.
In the example above, we need to collect 8192 samples before we can run the FFT, which when sampling at 10 kHz takes 0.82 seconds.
If we tried to get smaller FFT bins by running a longer FFT it would take even longer to collect the needed samples.
That may be OK, it may not be. The important point is that at a fixed sampling rate, increasing frequency resolution decreases temporal resolution. That is the more accurate your measurement in the frequency domain, the less accurate you can be in the time domain. You effectively lose all time information inside the FFT length.
In this example, if a 1999 Hz tone starts and stops in the first half of the 8192 sample FFT and a 2002 Hz tone plays in the second half of the window, we would see both, but they would appear to have occurred at the same time.
You also have to consider processing time. A 8192 point FFT takes some decent processing power. A way to reduce this need is to reduce the sampling rate, which is the second way to increase frequency resolution.
In your example, if you drop your sampling rate to something like 4096 Hz, then you only need a 4096-point FFT to achieve 1 Hz bins and can still resolve a 2 kHz signal (the Nyquist frequency is then 2048 Hz). This reduces the FFT bin size, but also reduces the bandwidth of the signal.
Ultimately with an FFT there will always be a trade-off between frequency resolution and time resolution. You have to perform a bit of a balancing act to reach all goals.
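A minimal sketch of the worked example (NumPy assumed): with a sampling rate and FFT length of 8192, the bins are 1 Hz wide, enough to separate 1999 Hz and 2000 Hz tones:
```
import numpy as np

fs = 8192              # sampling rate in Hz (Nyquist = 4096 Hz > 2 kHz)
N = 8192               # FFT length -> bin width = fs / N = 1 Hz
t = np.arange(N) / fs  # exactly 1 second of samples

x = np.sin(2 * np.pi * 1999 * t) + np.sin(2 * np.pi * 2000 * t)

spectrum = np.abs(np.fft.rfft(x))
peaks = np.sort(np.argsort(spectrum)[-2:])  # indices of the two largest bins
print(peaks * fs / N)                       # [1999. 2000.] -- resolved as separate bins
```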
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 269, "passed_quality_check": "True", "flesch_reading_ease": 63.53979508196723, "entropy": 2.0183671868274806} |
I wanted to produce a 1 GB random file, so I used the following command.
```
dd if=/dev/urandom of=output bs=1G count=1
```
But instead every time I launch this command I get a 32 MB file:
```
$ dd if=/dev/urandom of=output bs=1G count=1
0+1 records in
0+1 records out
33554431 bytes (34 MB, 32 MiB) copied, 0,288321 s, 116 MB/s
```
What is wrong?

**Response:**

```bs```, the buffer size, means the size of a single read() call done by dd. For example, both ```bs=1M count=1``` and ```bs=1k count=1k``` will result in a 1 MiB file, but the first version will do it in a single step, while the second will do it in 1024 small chunks.
Regular files can be read at nearly any buffer size (as long as that buffer fits in RAM), but devices and "virtual" files often work very close to the individual calls and have some arbitrary restriction of how much data they'll produce per read() call.
For ```/dev/urandom```, this limit is defined in urandom_read() in drivers/char/random.c:
```
#define ENTROPY_SHIFT 3

static ssize_t
urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
{
    nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
    ...
}
```
This means that every time the function is called, it will clamp the requested size to 33554431 bytes.
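A quick sanity check of that number (plain Python, reproducing the kernel's arithmetic):
```
INT_MAX = 2**31 - 1
ENTROPY_SHIFT = 3
print(INT_MAX >> (ENTROPY_SHIFT + 3))  # 33554431, i.e. 32 MiB minus one byte
```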
By default, unlike most other tools, dd will not retry after receiving less data than requested – you get the 32 MiB and that's it. (To make it retry automatically, as in Kamil's answer, you'll need to specify ```iflag=fullblock```.)
Note also that "the size of a single read()" means that the whole buffer must fit in memory at once, so massive block sizes also correspond to massive memory usage by dd.
And it's all pointless because you usually won't gain any performance when going above ~16–32 MiB blocks – syscalls aren't the slow part here, the random number generator is.
So for simplicity, just use ```head -c 1G /dev/urandom > output```.

---

**Prompt:**
The treads on my road bike's 28C tires are almost completely gone -- they only persist along the shoulders of the tire. Do the treads matter? What is a good indicator of when the tire as a whole needs to be replaced?

**Response:**

The tread on a road bike's tires is really quite unimportant and purely cosmetic. Road bike tires have tread patterns because they sell better, not because they perform better.
Here's some questions to ask to decide about replacing a tire:
* Can you see any of the fabric?
* Are you getting flats more easily than you used to?
* Is any part of the tire bulging out or lumpy?
* Are there visible holes/gashes in the tire?
* Does the wear indicator indicate you should replace? (Some tires have dimples or grooves that are there to indicate when the tire should be replaced. Some have an alternate color of rubber underneath, and when that shows through the tire should be replaced)
Smooth tread is actually ideal for road conditions, but not commonly found on tires. If you're on hard surfaces such as pavement/asphalt, the tread doesn't help you and actually slightly reduces contact area and increases rolling resistance. On most road tires the minimal tread pattern is just cosmetic to make their customers feel better (since a smooth tire looks slippery).
See also: Sheldon Brown's comments on tire replacement and tread patterns
If you plan to ride through mud, sand or other soft surfaces, you need tread. So when you no longer have tread on your mountain bike or cyclocross bike's tires, it's time to replace them (unless you plan to only use them on the road).

---

**Prompt:**
Is there a way to create an index on a property/column using fluent configuration, instead of using the new ```IndexAttribute```?

**Response:**

Currently there is no "first class support" for creating an index via the fluent API, but via the fluent API you can mark properties as having attributes from the Annotation API. This will allow you to add the ```Index``` attribute via a fluent interface.
Here are some examples from the work item on the Issues site for EF.
Create an index on a single column (```MyEntity``` below stands in for your entity type):
```
modelBuilder.Entity<MyEntity>()
    .Property(e => e.MyProperty)
    .HasColumnAnnotation(
        IndexAnnotation.AnnotationName,
        new IndexAnnotation(new IndexAttribute()));
```
Multiple indexes on a single column:
```
modelBuilder.Entity<MyEntity>()
    .Property(e => e.MyProperty)
    .HasColumnAnnotation(
        IndexAnnotation.AnnotationName,
        new IndexAnnotation(new[]
            {
                new IndexAttribute("Index1"),
                new IndexAttribute("Index2") { IsUnique = true }
            }));
```
Multi-Column indexes:
```
modelBuilder.Entity<MyEntity>()
    .Property(e => e.MyProperty1)
    .HasColumnAnnotation(
        IndexAnnotation.AnnotationName,
        new IndexAnnotation(new IndexAttribute("MyIndex", 1)));

modelBuilder.Entity<MyEntity>()
    .Property(e => e.MyProperty2)
    .HasColumnAnnotation(
        IndexAnnotation.AnnotationName,
        new IndexAnnotation(new IndexAttribute("MyIndex", 2)));
```
Using the above techniques will cause ```.CreateIndex()``` calls to be automatically created for you in your ```Up()``` function when you scaffold your next migration (or be automatically created in the database if you are not using migrations).

---

**Prompt:**
Both races have the same limits on their strength, and athletics, but it's hard to imagine why. A Goliath could feasibly lift and throw a gnome, yet the Gnome, following rules as written, can pin down a Goliath, fairly easily, if statted correctly.
Is there an in-universe explanation as to why such dramatically different sized creatures can wrestle on an even playing field?
How might a DM explain a scenario in which a gnome beats a goliath in any kind of test of strength?

**Response:**

D&D is a terrible reality simulator.
The problem that you're having here is that, in real life, most people who are 7 feet tall are going to be faster, stronger, and all-around more physically powerful than a person who is 3 feet tall. If you compare two people of similar athleticism, the person who is 4 feet taller is going to be able to physically dominate the other.
However, D&D is not a game built to simulate reality. It's a game built to let you play through stories about heroes who fight against a large variety of evil things. It's built to let players build a wide array of kinds of hero, and let them generally be effective at their role, even if the character options they picked wouldn't work in real life.
This means that the game will let you build a gnome strongman, if that's the kind of character you want to play. The designers wouldn't want your gnome strongman to be much worse at being a strongman than a character of a different race, so the only racial 'penalty' to being a strongman that you get is that you don't get a racial bonus to Strength. All of this is based on intentional design choices to make as many player archetypes as possible viable.
In-universe, they'd explain this just like we would in real life. If a really buff but physically small person wins, say, an arm-wrestling contest with someone pushing 7 feet, we'd be impressed. It would be odd, for sure, but it wouldn't be something impossible. In a world where divine blessings and magic are commonplace, the idea of a strong gnome beating a goliath in a contest of strength becomes less unbelievable, and more possible.

---

**Prompt:**
So I'm pretty far into writing my dystopian novel and I was reading over what I had. Something that helps me when I first start a novel is to get a clear picture of my characters in my head and put a face to a name, so I usually sculpt a personality and find a Google image of someone who I think matches that, and I put all of those into documents for my personal reference. I looked over my main five characters--Analise, Poet, Shove, Star, and Nova--and then suddenly something jumped out at me. Analise is Hispanic, Shove is Japanese, and Poet, Star, and Nova are all black.
I had forgotten about their races because it wasn't important to me and I had not noticed while I was writing, because the story isn't about their racial backgrounds. But is it, I don't know, somehow alienating or offensive to white readers that the characters aren't white, and that no main characters are white?

**Response:**

The answer I'll give you here is the same as the ones I've already given you and others: write what works for you. If these are who the characters are, then that's who they are. If you're forcing diversity, then it will come off as forced. That includes making some characters white just to be diverse.
Will you alienate or even offend some white readers? Yes.
But this is not the type of offense to worry about. Some people are so used to being in the mainstream everywhere and for everything, that they loudly protest when suddenly they're not. If they don't like your story, they can go literally anyplace else to find beautiful, stirring, authentic depictions of all sorts of white people. Even within works about people of color.
Some people will argue that this is exactly the same as novels only including white people. But, no. It isn't. Because representation isn't just about a single work. It's about the entirety of our culture. Americans (and most Westerners) find white people so central to their understanding of the universe that they insert them in places they might not otherwise be and tell entire stories set in nonwhite worlds from the white character's point of view. (I just watched The Last King of Scotland which does exactly this...they invented a white character for this very purpose...in a movie about real events in Uganda.)
Write the story that matters to you.

---

**Prompt:**
When I do ```\footnote{}``` for a value in a table, the footnote doesn't show up. How do I get it to show up? Also, is it possible to get it to show up at the bottom of the table rather than the bottom of the page?

**Response:**

This is a classic difficulty in LaTeX.
The problem is how to do layout with floats (figures and tables, and similar objects) and footnotes. In particular, it is hard to pick a place for a float with certainty that making room for the associated footnotes won't cause trouble. So the standard ```tabular``` and ```figure``` environments don't even try.
What can you do:
* Fake it. Just put a hardcoded vertical skip at the bottom of the caption and then write the footnote yourself (use ```\footnotesize``` for the size). You also have to manage the symbols or number yourself with ```\footnotemark```. Simple, but not very attractive, and the footnote does not appear at the bottom of the page.
* Use the ```tabularx```, ```longtable```, ```threeparttable[x]``` (kudos to Joseph) or ```ctable``` which support this behavior.
* Manage it by hand. Use ```[h!]``` (or ```[H]``` with the float package) to control where the float will appear, and ```\footnotetext``` on the same page to put the footnote where you want it. Again, use ```\footnotemark``` to install the symbol. Fragile and requires hand-tooling every instance.
* The ```footnote``` package provides the ```savenote``` environment, which can be used to do this.
* Minipage it (code stolen outright, and read the disclaimer about long caption texts in that case):
```
\begin{figure}
  \begin{minipage}{\textwidth}
    ...
    \caption[Caption for LOF]%
      {Real caption\footnote{blah}}
  \end{minipage}
\end{figure}
```
Additional reference: TeX FAQ item Footnotes in tables.

---

**Prompt:**
Why is kVA not the same as kW?

**Response:**

Both watts and volt-amps come from the same equation, $P=IV$, but the difference is how they're measured.
To get volt-amps, you multiply root mean square (RMS) voltage ($V$) with RMS current ($I$) with no regard for the timing/phasing between them. This is what the wiring and pretty much all electrical/electronic components have to deal with.
To get watts, you multiply instantaneous voltage ($V$) with instantaneous current ($I$) for every sample, then average those results. This is the energy that is actually transferred.
Now to compare the two measurements:
If voltage and current are both sinewaves, then $\text{watts} = \text{volt-amps} \times \cos(\phi)$, where $\phi$ is the phase angle between voltage and current. It's pretty easy to see from this that if they're both sine waves and if they're in phase ($\phi = 0$), then $\text{watts} = \text{volt-amps}$.
However, if you're NOT dealing with sine waves, the $\cos(\phi)$ relationship no longer applies! So you have to go the long way around and actually do the measurements as described here.
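A minimal numerical sketch of the two measurements (NumPy assumed; the waveform values are arbitrary illustrations):
```
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)  # one cycle of a 1 Hz waveform
phi = np.pi / 3                                  # 60-degree phase lag
v = 170 * np.sin(2 * np.pi * t)                  # instantaneous voltage
i = 10 * np.sin(2 * np.pi * t - phi)             # instantaneous current

watts = np.mean(v * i)                           # average of instantaneous products
volt_amps = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))  # RMS(V) x RMS(I)

print(watts, volt_amps, volt_amps * np.cos(phi))  # watts == volt-amps * cos(phi) here
```
For a non-sinusoidal current, like the charging spikes described below, only the `watts` computation remains valid; the $\cos(\phi)$ shortcut does not.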
How might that happen? Easy. DC power supplies. They're everywhere, including battery chargers, and the vast majority of them only draw current at the peak of the AC voltage waveform because that's the only time that their filter capacitors are otherwise less than the input voltage. So they draw a big spike of current to recharge the caps, starting just before the voltage peak and ending right at the voltage peak, and then they draw nothing until the next peak.
And of course there's an exception to this rule also, and that is Power Factor Correction (PFC). DC power supplies with PFC are specialized switching power supplies that end up producing more DC voltage than the highest AC peak, and they do it in such a way that their input current follows the input voltage almost exactly. Of course, this is only an approximation, but the goal is to get a close enough match that the $\cos(\phi)$ shortcut becomes acceptably close to accurate, with $\phi \approx 0$. Then, given this high voltage DC, a secondary switching supply produces what is actually required by the circuit being powered.

---

**Prompt:**
Elon Musk and his partner want to name their child X Æ A-12.
Is that name allowed in California, US?

**Response:**

We could start with what the statutes say (HSC 102425):
> (a) The certificate of live birth for a live birth occurring on or after January 1, 2016, shall contain those items necessary to establish the fact of the birth and shall contain only the following information; (1) Full name and sex of the child.
It says nothing about the form of that name. Therefore, any prohibition of the letter Æ (or æ) etc. would have to come from the administrative interpretation of California Department of Public Health, Vital Records. There is a long handbook, which on p. 112 states the rule regarding child names. The rules note that
> The form must be completed using the 26 alphabetical characters of the English language with appropriate punctuation, if necessary. No pictographs, ideograms, diacritical marks (è, ñ, ē, ç), or extraneous entries are allowed.
So the short answer is, unless you feel like making a court case out of the matter and you have a lot of money, this name will not be allowed. The rule might be challenged in court as exceeding statutory authority, and might well be deemed to discriminate w.r.t. race and national origin. The rule could be defended on grounds of necessity, if we presume that the department is incapable of recording information other than the 26 letters and "appropriate punctuation" (undefined, presumably only apostrophe). It's not that in principle Unicode doesn't exist, it's that their system may not be capable of dealing with it (numerous problems would arise from the non-unique mapping from visual representation to Unicode number). There does not seem to be a court ruling on the matter.

---

**Prompt:**
In this Creation magazine reprint of a 1994 article titled Exploding stars point to a young universe, Young-Earth Creationist Jonathan Sarfati argues that the scarcity of supernova remnants (SNRs) in the sky suggests the Milky Way galaxy is less than billions of years old.
> On average, a galaxy like our own, the Milky Way, should produce one supernova every 25 years.
> [...]
> As can be readily seen above, a young universe model fits the data of the low number of observed SNRs. If the universe was really billions of years old, there are 7000 missing SNRs in our galaxy.
Does astronomy predict a Milky Way supernova every 25 years? Are there missing SNRs that undermine these predictions?

**Response:**

There are many reasons why this is wrong. The first one is the assumption of 1 supernova per 25 years. That was the value arrived at in Gustav Tammann's article published in 1970. Others got values up to 100 years (Tammann himself changed his value later). All these values are really only good guesses.
Worse errors are made in the percentage of remnants that should be visible. To start with, only a small percentage of them are visible; the others are obscured by dust in the galaxy. This does not happen to the same extent to the supernovas themselves, as they are intrinsically very much brighter.
Sarfati also uses an old paper to come up with the number of remnants in the Large Magellanic Cloud (a satellite galaxy to the Milky Way). However, that figure was considered to be wrong (and much smaller) in later studies. Older remnants are very difficult to distinguish against a background of dust in the galaxy. Many of them disappear completely after about 10,000 years or so.
It should be noted that, using Sarfati's own reasoning, we should not be able to see any remnants older than 6000 years. Instead, we know of remnants (e.g. G166.2 + 2.5) that are over 100,000 years old. How did those originate in his universe? The same goes for the distance to supernovas. The distance to SN 1987A has been trigonometrically measured at 167,000 light years (the paper says 51,200 parsecs). In other words, its light took more than 30 times longer to reach us than the creationist universe has supposedly existed.
Remember also that a supernova typically becomes either a pulsar or a black hole, both of which are very hard to observe. A black hole can only be "seen" through its effects on other matter, and a pulsar is mainly visible if the solar system happens to be in line with its sweeping radio beam (with some pulsars the star itself has also been seen). Hence, all we can see of them long after the event is an expanding cloud of gas that gets dimmer over the millennia.
There are several other reasons which are covered in a very good article on TalkOrigins. That article has links to all the original scientific papers referred to, so you can check the validity yourself.

---

**Prompt:**
Why is there so much technical detail of whaling included in Moby-Dick?

**Response:**

There are multiple purposes behind the novel structure of Moby-Dick.
In the first instance, the author was himself a whaleman, having spent 18 months as an ordinary seaman aboard the whaler Acushnet in 1841–42. In his experience, the literature of the day, such as Miriam Coffin or The Whale-Fisherman by Joseph Hart, did not portray whaling accurately but rather sensationalised it. He wished to provide his readers with a more realistic account.
In doing so, he also gave the reader the information necessary to better understand and appreciate the passages of whaling described in the novel. With the reader forewarned, he is able to keep the narrative passages tense and exciting. It is interesting that, early in the novel, the Pequod meets virtually no whales and that the chases become longer and more involved as the story progresses. This may be because Melville felt he needed to give the reader more and more technical information before they could fully appreciate these encounters, but understood the science would need to be spaced out among the narrative.
The science also serves another useful narrative function: it indicates that on whaling voyages, long periods of time pass when nothing much happens at all. Padding out the story with scientific digressions serves as a useful narrative metaphor for that waiting.
The level of detail provided also mirrors one of the novel's central themes: obsession. Although Ishmael is the narrator and Ahab the obvious obsessive, the amount of science indicates that Ishmael is also an obsessive. His obsession is the business of whaling itself.
Finally, Melville made good use of these passages in serving as metaphors for other themes of the novel where he felt it appropriate. For example, the chapter The Line, which describes how the whale line is stored and used in the boat, ends by using the line as a metaphor.
"All men live enveloped in whale-lines. All are born with halters round their necks; but it is only when caught in the swift, sudden turn of death, that mortals realize the silent, subtle, ever-present perils of life."
---

**Prompt:**
Why are we building larger land-based telescopes instead of launching larger ones into space?

**Response:**

Because it's cheaper.
(1) With adaptive optics you can get 0.1 arc second resolution on the ground (admittedly only on a mountain top with particularly good air flow, but still!). This eliminates one of the major advantages of space until you get above several meters mirror diameter.
(2) Rocket fairings are the shrouds which protect payloads during the supersonic atmospherics speeds reached during launch. A 5 meter fairing is about the largest that can be flown, which limits the size of the one-piece mirrors which can be launched. (The Dreaded Webb Telescope's mirror is in pieces which will assemble themselves in space -- a very scary and very expensive piece of design.)
(3) Servicing a telescope on the top of Mauna Kea or in the high Chilean Andes is a difficult and expensive process. Servicing a telescope in orbit makes that look like small change. (Cost comparable to the cost of building a new giant scope on Earth.) And in-orbit servicing can't even be done with current technology except in low earth orbit.
(4) While high resolution is one frontier in astronomy, going deep is another, and going deep requires big mirrors. A 30 meter mirror on Earth gathers much more light than a 5 meter mirror in space. The giant terrestrial telescopes simply do a better job of being light buckets for spectroscopy than anything we can yet put into space.
The bottom line is that with the development of adaptive optics, space-based telescopes of currently buildable and launchable size lost their main advantage over ground-based telescopes. And since they're 10x to 100x the cost, they are simply not worth building for many purposes.
Space based telescopes still hold a significant edge in parts of the spectrum blocked by the atmosphere such as UV and IR (Webb), and for certain tasks involving long-term high accuracy photometry (Kepler) and astrometry (Gaia). But for general purpose use, the balance seems firmly on the ground's side for large telescopes.
This will change if space flight becomes cheaper -- the SpaceX BFR, for example, with its 9 meter fairing and dramatically lower launch costs, offers great hope for space telescopes.

---

**Prompt:**
Why can we see the dust particles in a narrow beam of light (and not in a fully lit area)?

**Response:**

Your inability to see the dust until you narrow the slit has nothing to do with the narrowness of the beam, but instead with the dynamic range of light that your eye can see at one time.
A bit of searching turns up reports of a contrast ratio for your eye at one time as between 100:1 and 1000:1. This means if you're in a room with a range of brightness greater than about 100 to 1, the brightest things will all be washed out as white and the darkest things will all be essentially black. This is obvious in photos that are "backlit" like this one:
These horses aren't black, but because the ratio of the bright light to the dark horses exceeds the dynamic range of the camera, the sky is washed out white and the horses are in silhouette.
Your eye can adjust over time to a huge range but it can't utilize the whole range all at once.
In the case of dust reflecting light, if you allow a lot of light into the room, the relative brightness between the small amount of light the dust is reflecting and the rest of the illuminated room prevents you from seeing the dust.
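To put rough numbers on that (the values below are made up purely for illustration, not measurements):
```
# illustration: can the dust's reflection fit inside the eye's usable contrast range?
dust_signal = 1.0        # light the dust scatters toward your eye (arbitrary units)
contrast_limit = 100.0   # rough usable contrast ratio of the eye at one moment

for background in (10.0, 1000.0):  # darkened room with a beam vs. fully lit room
    brightest = background + dust_signal
    # anything dimmer than brightest / contrast_limit is effectively black
    visible = dust_signal > brightest / contrast_limit
    print(f"background={background}: dust visible? {visible}")
```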
This is fundamental to signal processing. Why can't you hear a whisper in a noisy room? The noise of the crowd obscures the whisper. The difference between the signal you're trying to pick up and the background noise is called the signal-to-noise ratio. In the case of dust, the light let into the room is scattered and reflected in the room and causes the room to be illuminated. This is the noise that obscures the signal from light reflected off of the dust. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 74, "passed_quality_check": "True", "flesch_reading_ease": 97.70500000000003, "entropy": 0.5770027460845712} |
I cannot initialize a List as in the following code:
```List<String> supplierNames = new List<String>();
supplierNames.add("sup1");
supplierNames.add("sup2");
supplierNames.add("sup3");
System.out.println(supplierNames.get(1));
```
I face the following error:
Cannot instantiate the type ```List```
How can I instantiate ```List```? | null | [] | If you check the API for ```List``` you'll notice it says:
```Interface List<E>
```
Being an ```interface``` means it cannot be instantiated (no ```new List()``` is possible).
If you check that link, you'll find some ```class```es that implement ```List```:
All Known Implementing Classes:
```AbstractList```, ```AbstractSequentialList```, ```ArrayList```, ```AttributeList```, ```CopyOnWriteArrayList```, ```LinkedList```, ```RoleList```, ```RoleUnresolvedList```, ```Stack```, ```Vector```
Some of those can be instantiated (the ones that are not defined as ```abstract class```). Use their links to learn more about them, i.e. to find out which one best fits your needs.
The 3 most commonly used ones probably are:
``` List<String> supplierNames1 = new ArrayList<String>();
List<String> supplierNames2 = new LinkedList<String>();
List<String> supplierNames3 = new Vector<String>();
```
Bonus:
You can also instantiate it with values, in an easier way, using the ```Arrays``` ```class```, as follows:
```List<String> supplierNames = Arrays.asList("sup1", "sup2", "sup3");
System.out.println(supplierNames.get(1));
```
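If you need to keep adding elements after initializing with values, a common pattern (a minimal sketch) is to copy that fixed-size list into an ```ArrayList```:
```List<String> mutableNames = new ArrayList<String>(Arrays.asList("sup1", "sup2", "sup3"));
mutableNames.add("sup4"); // allowed here, unlike on the Arrays.asList result
```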
Either way, note that you are not allowed to add more elements directly to the list returned by ```Arrays.asList```, as it's ```fixed-size```. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 293, "passed_quality_check": "False", "flesch_reading_ease": 9.947142857142893, "entropy": 1.3587329085344588} |
What is the difference between ```warnings.warn()``` and ```logging.warn()``` in terms of what they do and how they should be used? | null | [] | Here are the differences:
* ```logging.warning``` just logs something at the ```WARNING``` level, in the same way that ```logging.info``` logs at the ```INFO``` level and ```logging.error``` logs at the ```ERROR``` level. It has no special behaviour.
* ```warnings.warn``` emits a ```Warning```, which may be printed to ```stderr```, ignored completely, or thrown like a normal ```Exception``` (potentially crashing your application) depending upon the precise ```Warning``` subclass emitted and how you've configured your Warnings Filter. By default, warnings will be printed to ```stderr``` or ignored.
* Warnings emitted by ```warnings.warn``` are often useful to know about, but easy to miss (especially if you're running a Python program in a background process and not capturing ```stderr```). For that reason, it can be helpful to have them logged.
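Python ships a built-in integration between the two modules for exactly this. A minimal sketch (the ```basicConfig``` setup and the messages are arbitrary):
```
import logging
import warnings

logging.basicConfig(level=logging.INFO)
logging.captureWarnings(True)  # route warnings through the 'py.warnings' logger

warnings.warn("this deprecation notice now shows up in the log output")
```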
That is all it takes: call ```logging.captureWarnings(True)``` at the start of your script, and all warnings emitted by the ```warnings``` module will automatically be logged at level ```WARNING``` under a logger named ```py.warnings```. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 112, "passed_quality_check": "False", "flesch_reading_ease": 76.53625000000002, "entropy": 0.6705798323660245} |
In Greek mythology, the words "Titan" and "God" seem to be used interchangeably. For example, Zeus is a God, but Cronus (his father) was a Titan. So what is the difference between a Titan and a God in Greek mythology? | null | [] | A 'god' is synonymous to a 'deity'; the Titans and Titanesses were gods, they were members of the second order of divine beings - after Gaia and Uranus, and the other primordial deities.
Cronus and his fellow Titans comprised the second Ancient Greek pantheon (the first comprising deities such as Ananke, Gaea, and Ouranos), which the younger generation of gods, including Zeus, would later usurp, as the Titans usurped the primordial gods before them. (For more information on this concept of change in dominant deities, read up on the theory of The Golden Age, particularly that of Hesiod, who also wrote the only surviving account of the Titans in his Theogony.)
As for why the Titans have their own order name as opposed to Zeus et al who are simply 'gods', there is debate as to the exact etymology of the word 'titan' but this quote from Hesiod's Theogony states:
"But these sons whom he begot himself great Heaven used to call Titans (Strainers) in reproach, for he said that they strained (τιταίνοντας) and did presumptuous a fearful deed, and that vengeance (τίσιν) for it would come afterwards."
Trans. Evelyn-White, here with side-by-side Greek.
However this is not to say Hesiod was correct; he himself was writing long after the supposed Golden Age ended. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 178, "passed_quality_check": "True", "flesch_reading_ease": 76.97666666666669, "entropy": 1.8399997064505589} |
How do weather models work? | null | [] | All numerical atmospheric models are built around calculations derived from primitive equations that describe atmospheric flow. Vilhelm Bjerknes discovered the relationships and thereby became the father of numerical weather prediction. Conceptually, the equations can be thought of as describing how a parcel of air would move in relationship to its surroundings. For instance, we learn at a young age that hot air rises. The hydrostatic vertical momentum equation explains why and quantifies under what condictions hot air would stop rising. (As the air rises it expands and cools until it reaches hydrostatic equilibrium.) The other equations consider other types of motion and heat transfer.
Unfortunately, the equations are nonlinear, which means that you can't simply plug in a few numbers and get useful results. Instead, weather models are simulations which divide the atmosphere into three-dimensional grids and calculate how matter and energy will flow from one cube of space into another during discrete time increments. Actual atmospheric flow is continuous, not discrete, so by necessity the models are approximations. Different models make different approximations appropriate to their specific purpose.
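As a toy illustration of the grid-and-timestep idea (this is not a real weather model: it is one-dimensional, tracks a single variable, and all the constants are made up):
```
import numpy as np

# 1-D "atmosphere": temperature on a row of 100 grid cells
T = np.full(100, 280.0)  # kelvin everywhere...
T[45:55] = 300.0         # ...except a warm patch in the middle
kappa = 0.25             # diffusion number per step; kept below 0.5 for stability

for step in range(1000):
    # each step exchanges heat between neighboring cells,
    # a discrete approximation of a continuous process
    T[1:-1] += kappa * (T[2:] - 2 * T[1:-1] + T[:-2])
```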
Numerical models have been improving over time for several reasons:
* More and better input data,
* Tighter grids, and
* Better approximations.
Increasing computational power has allowed models to use smaller grid boxes. However, the number of computations increases exponentially with the number of boxes and the process suffers diminishing returns. On the input end of things, more and better sensors improve the accuracy of the initial conditions of the model. Synoptic scale and mesoscale models take input from General Circulation Models, which helps set reasonable initial conditions. On the output end, Model Output Statistics do a remarkable job of estimating local weather by comparing the current model state with historical data of times when the model showed similar results. Finally, ensemble models take the output of several models as input and produce a range of possible outcomes. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 23, "passed_quality_check": "False", "flesch_reading_ease": 83.32000000000004, "entropy": 0.06925032382255482} |
I am currently trying to decipher Mazur's Eisenstein ideal paper (not a comment about his clarity, rather about my current abilities). One of the reasons I am doing that is that many people told me that the paper was somehow revolutionary and introduced a new method into number theory.
Could you explain exactly what subsequent developments did the paper bring, what ideas in the paper were considered more-or-less original (at the time it was published), and exactly what difficulties did these ideas resolve that people failed to resolve before the paper was published (if any)? | null | [] | First, Mazur's paper is arguably the first paper where the new ideas (and language) of the Grothendieck revolution in algebraic geometry were fully embraced and crucially used in pure number theory. Here are several notable examples: Mazur makes crucial use of the theory of finite flat group schemes to understand the behavior of the $p$-adic Tate modules of Jacobians at the prime $p$. He studies modular forms of level one over finite rings (which need not lift to characteristic zero when the residue characteristic is $2$ or $3$). He proves theorems about mod-$p$ modular forms using what are essentially comparison theorems between etale cohomology and de Rham cohomology, and many more examples. The proof of the main theorem ($\S5$, starting at page 156) is itself a very modern proof which fundamentally uses the viewpoint of $X_0(N)$ as a scheme.
Second, there are many beautiful ideas which have their origin in this paper: it contains many of the first innovative ideas for studying $2$-dimensional (and beyond) Galois representations, including the link between geometric properties (multiplicity one) and arithmetic properties, geometric conceptions for studying congruences between Galois representations, understanding the importance of the finite-flat property of group schemes, and the identification of the Gorenstein property. There is a theoretical $p$-descent on the Eisenstein quotient when previously descents were almost all explicit $2$-descents with specific equations. It introduces the winding quotient, and so on.
Third, while it is a dense paper, it is dense in the best possible way: many of the small diversions could have made interesting papers on their own. Indeed, even close readers of the paper today can find connections between Mazur's asides and cutting edge mathematics. When Mazur raises a question in the text, it is almost invariably very interesting. One particular (great) habit that Mazur has is thinking about various isomorphisms and by pinning down various canonical choices identifies refined invariants. To take a random example, consider his exploration of the Shimura subgroup at the end of section 11. He finishes with a question which to a casual reader may as well be a throw-away remark. But this question was first solved by Merel, and more recently generalized in some very nice work of Emmanuel Lecouturier. Lecouturier's ideas then played an important role in the work of Michael Harris and Akshay Venkatesh. Again, one could give many more such examples of this. Very few papers have the richness of footnotes and asides that this paper does. Never forget that one of the hardest things in mathematics is coming up with interesting questions and observations, and this paper contains many great ones - it is bursting with the ideas of a truly creative mathematician.
Finally, the result itself is amazing, and (pretty much) remains the only method available for proving the main theorem (the second proof due to Mazur is very related to this one). To give a sense of how great the theorem is, note that if $E$ is a semistable elliptic curve, then either $E$ is isogenous to a curve with a $p$-torsion point, or $E[p]$ is absolutely irreducible. This result (added for clarity: explicitly, Mazur's Theorem that $E/\mathbf{Q}$ doesn't have a $p$-torsion point for $p > 7$) is crucially used in Wiles' proof of Fermat. One could certainly argue that without this paper (and how it transformed algebraic number theory) we would not have had Wiles' proof of Fermat, but it's even literally true that Mazur's theorem was (and remains so today, over 40 years later) an essential step in any proof of Fermat. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 488, "passed_quality_check": "True", "flesch_reading_ease": 50.23139455782315, "entropy": 3.0567403215355635} |
Tracing XML request/responses with JAX-WS | null | [] | Here is the solution in raw code:
```Endpoint ep = Endpoint.create(new WebserviceImpl());
List<Handler> handlerChain = ep.getBinding().getHandlerChain();
handlerChain.add(new SOAPLoggingHandler());
ep.getBinding().setHandlerChain(handlerChain);
ep.publish(publishURL);
```
Where SOAPLoggingHandler is (ripped from linked examples):
```package com.myfirm.util.logging.ws;
import java.io.PrintStream;
import java.util.Map;
import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;
/*
* This simple SOAPHandler will output the contents of incoming
* and outgoing messages.
*/
public class SOAPLoggingHandler implements SOAPHandler<SOAPMessageContext> {
// change this to redirect output if desired
private static PrintStream out = System.out;
public Set<QName> getHeaders() {
return null;
}
public boolean handleMessage(SOAPMessageContext smc) {
logToSystemOut(smc);
return true;
}
public boolean handleFault(SOAPMessageContext smc) {
logToSystemOut(smc);
return true;
}
// nothing to clean up
public void close(MessageContext messageContext) {
}
/*
* Check the MESSAGE_OUTBOUND_PROPERTY in the context
* to see if this is an outgoing or incoming message.
* Write a brief message to the print stream and
* output the message. The writeTo() method can throw
* SOAPException or IOException
*/
private void logToSystemOut(SOAPMessageContext smc) {
Boolean outboundProperty = (Boolean)
smc.get (MessageContext.MESSAGE_OUTBOUND_PROPERTY);
if (outboundProperty.booleanValue()) {
out.println("\nOutbound message:");
} else {
out.println("\nInbound message:");
}
SOAPMessage message = smc.getMessage();
try {
message.writeTo(out);
out.println(""); // just to add a newline
} catch (Exception e) {
out.println("Exception in handler: " + e);
}
}
}
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 37, "passed_quality_check": "False", "flesch_reading_ease": 78.87285714285717, "entropy": 0.05231090612500756} |
In Vim, how do I insert characters at the beginning of each line in a selection?
For instance, I want to comment out a block of code by prepending ```//``` at the beginning of each line assuming my language's comment system doesn't allow block commenting like ```/* */```. How would I do this? | null | [] | The general pattern for search and replace is:
```:s/search/replace/
```
Replaces the first occurrence of 'search' with 'replace' for current line
```:s/search/replace/g
```
Replaces all occurrences of 'search' with 'replace' for current line, 'g' is short for 'global'
This command will replace each occurrence of 'search' with 'replace' for the current line only. The % is used to search over the whole file. To confirm each replacement interactively append a 'c' for confirm:
```:%s/search/replace/c
```
Interactive confirm replacing 'search' with 'replace' for the entire file
Instead of the % character you can use a line number range (note that the '^' character is a special search character for the start of line):
```:14,20s/^/#/
```
Inserts a '#' character at the start of lines 14-20
If you want to use another comment character (like //) then change your command delimiter:
```:14,20s!^!//!
```
Inserts a '//' character sequence at the start of lines 14-20
Or you can always just escape the // characters like:
```:14,20s/^/\/\//
```
Inserts a '//' character sequence at the start of lines 14-20
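As an aside, a visual-block alternative (separate from search-and-replace) also works; the keystrokes below assume a stock Vim setup:
```Ctrl-v   " enter visual block mode on the first line of the block
3j       " extend the selection down (here: three more lines)
I//<Esc> " type I, then //, then Esc; the insertion is applied to every selected line
```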
If you are not seeing line numbers in your editor, simply type the following
```:set nu
```
Hope this helps! | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 242, "passed_quality_check": "False", "flesch_reading_ease": 72.9868181818182, "entropy": 1.8550861761250104} |
Why doesn't the nuclear fusion in a star make it explode? | null | [] | The fusion that occurs in the core of the Sun occurs in nothing like the conditions you might be thinking of in a bomb, or a fusion reactor. In particular, it occurs at much lower temperatures and at a much lower rate. A cubic metre of material in the solar core is only releasing around 250 W of power by fusion.
The fusion rate is set by the temperature (and to a lesser extent, density) of the core. This in turn is set by the need for a pressure gradient to balance the weight of material pressing down on it from above. At 15 million kelvin (the core temperature, which is much lower than the temperatures in nuclear bombs or fusion reactors), the average proton has a lifetime of several billion years before being converted (with three others) into a helium nucleus. There are two reasons this is slow. First, you have to get protons, which repel each other electromagnetically, close enough together to feel the strong nuclear force. This is why high temperatures are needed. Second, because the diproton is unstable, one of the protons needs to change into a neutron via a weak force interaction, whilst it is in the unstable diproton state, to form a deuterium nucleus. This is just inherently unlikely and means the overall reaction chain to helium is very slow.
The reason there is no bomb-like explosion is because there is no problem in shifting 250 W per cubic metre away from the core, in the same way that a compost heap, which generates about the same power density, does not spontaneously explode. In the case of a star any additional heat goes into more radiation that diffuses away and in work done in expanding the star. As a result, the temperature of the core is stable. Ultimately, any additional energy emerges as sunlight at the solar photosphere.
If for some reason, the opacity to radiation in the core increased, then the temperature would rise and more energy would be generated by fusion. This is exactly what happens in the core as more hydrogen is turned into helium; the core temperature and luminosity do rise, but slowly, on timescales of billions of years. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 47, "passed_quality_check": "True", "flesch_reading_ease": 80.30636363636367, "entropy": 0.37215202181165896} |
Does hot water freeze faster than cold water? | null | [] | In certain settings, cold water freezes slower than hot water. This is called the Mpemba effect:
Hot water can in fact freeze faster than cold water for a wide range of experimental conditions. This phenomenon is extremely counterintuitive, and surprising even to most scientists, but it is in fact real. It has been seen and studied in numerous experiments. While this phenomenon has been known for centuries, and was described by Aristotle, Bacon, and Descartes, it was not introduced to the modern scientific community until 1969, by a Tanzanian high school student named Mpemba.
Here are some suggested reasons:
1. Evaporation — As the initially warmer water cools to the initial temperature of the initially cooler water, it may lose significant amounts of water to evaporation. The reduced mass will make it easier for the water to cool and freeze. Then the initially warmer water can freeze before the initially cooler water, but will make less ice.
2. Dissolved Gasses — Hot water can hold less dissolved gas than cold water, and large amounts of gas escape upon boiling. So the initially warmer water may have less dissolved gas than the initially cooler water. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 38, "passed_quality_check": "False", "flesch_reading_ease": 92.965, "entropy": 0.04846126165116824} |
What is Reflection? Why is it useful?
* Is it recommended to use it in projects?
* How is it used?
* In which situations can Reflection be used? | null | [] | Reflection is a term used to describe the ability to obtain metadata about the compiled program itself (in Portuguese it is simply called "reflexão").
What do you mean, metadata?
Obtaining information about data types dynamically is very important for code that acts on objects it does not know at design time. It is heavily used in libraries that set out to handle objects of types they did not know about when they were designed.
ASP.NET MVC, for example, is only a feasible project because of reflection. How else could it call the methods of controllers it doesn't even know?
That said, it must be noted that reflection is not always fast: permissions have to be checked, as does the security level under which the code is running, among other things. For that reason, when some result is obtained through reflection, that information is usually cached, i.e. saved in memory.
Is it recommended?
It depends on the use:
* yes, it is recommended: for creating dynamic behavior using attributes, reflection is the only option
* yes, it is recommended: for calling a method that is not known when the system is designed, it is also the only option
* no, doing this is abuse: calling a method of a class known at design time using reflection... is absurd
There are so many ways to use reflection that I could keep this list going for a long time. So the best advice is this: common sense.
How to use it?
There are a few vectors for using reflection that I know of. To demonstrate them, let's first consider the following class:
```public class Pessoa
{
public string Nome { get; set; }
}
```
* Lambda expressions: these are one vector for reflection, since it is possible to use lambda expressions to reach the types, methods and other reflected objects that were used in the expression. This is commonly used to make the retrieval of reflected objects for members of a class strongly typed; before this, those objects could only be obtained by passing a string with the member's name.
```Expression<Func<Pessoa, object>> expr = p => p.Nome;
var lambda = (LambdaExpression)expr;
var cast = lambda.Body as UnaryExpression;
var member = (cast != null ? cast.Operand : lambda.Body) as MemberExpression;
var propInfo = member.Member as PropertyInfo;
Console.WriteLine(propInfo.Name); // "Nome"
```
* typeof(Type): this is one of the most common ways to obtain reflected information from the system. It serves to get information about the indicated type directly.
```var tipo = typeof(Pessoa);
Console.WriteLine(tipo.Name); // "Pessoa"
```
* obj.GetType(): as common as ```typeof```, but instead of referring to a type directly, it refers to the type of the object in question. One detail, though: it is not the declared type of the variable, but the type of the object itself.
```var p = new Pessoa();
var tipo = p.GetType();
Console.WriteLine(tipo.IsSealed); // "false"
```
* Assembly: this is used to obtain types at large scale: for example, to scan all the types that exist in an assembly, or in all loaded assemblies.
```// locating one type among all the loaded types
var todosOsTiposCarregados = AppDomain.CurrentDomain
.GetAssemblies()
.SelectMany(a => a.GetTypes())
.Where(t => t.Name == "Pessoa");
```
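To tie these vectors together, here is a minimal sketch of actually using a reflected member at runtime (it reuses the Pessoa class above; nothing here is specific to any framework):
```var p = new Pessoa();
var prop = typeof(Pessoa).GetProperty("Nome");
prop.SetValue(p, "Maria", null); // write the property with no compile-time reference to it
Console.WriteLine(prop.GetValue(p, null)); // "Maria"
```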
What about the Reflection.Emit namespace?
It is not clear in my mind whether I consider that reflection or not. Either way, it is necessary to use reflection to generate code dynamically, so there is a connection.
As I said, this namespace contains classes used to generate code dynamically... and then compile it. Remember what I said about caching reflection results... compiled methods are a good target for caching, because once compiled, future uses will be extremely fast. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 110, "passed_quality_check": "False", "flesch_reading_ease": 90.08800000000002, "entropy": 0.4027817043483151} |
What is the difference between minimum and infimum? | null | [] | The minimum is attained, the infimum isn't necessarily.
Example.
Let $f(x) = \frac{1}{x}$. Then $f$ has no minimum value on the interval $(0,\infty)$. The minimum is the smallest element in the set. That is
$$
\min\{f(x)\mid x\in (0,\infty)\}
$$
doesn't exist because there is no smallest number in the set.
Another example is the minimum of the set $S = (0,1) = \{x\mid 0<x<1\}$. Here again there isn't a smallest number
$$
\min\{x\mid 0<x<1\}
$$
doesn't exist.
The infimum of a set $S$ is defined as the greatest number that is less than or equal to all elements of S (from Wikipedia). The infimum is also sometimes called the greatest lower bound.
It is a fact that every non-empty set of real numbers that is bounded below has an infimum. But, as we saw, not every set of real numbers has a minimum.
So in the example
$$
\inf\{f(x)\mid x\in (0,\infty)\} = 0.
$$
Note that the infimum and the minimum can be the same. Consider for example $S = \{1,2,3,\dots\}$. Then the infimum and minimum is both $1$.
Consider this other example. If $f$ is a continuous function on a closed interval $[a,b]$, then it is a fact that $f$ attains a minimum over that interval. So here again
$$
\inf\{f(x)\mid x\in [a,b]\} = \min\{f(x)\mid x\in [a,b]\}.
$$ | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 44, "passed_quality_check": "False", "flesch_reading_ease": 50.66500000000002, "entropy": 0.2641616617825693} |
I had a Nespresso Vertuo Next machine. It stopped working properly and during the troubleshooting video call, the Nespresso support agent said that the machines should not be connected to a GFCI outlet because they can potentially damage the machine. As part of our home inspection when we purchased the house, it was recommended to install such outlets anywhere that water is common, including the kitchen. As such, all the outlets in our kitchen are GFCI outlets.
This call with Nespresso was the first time I'd ever seen someone claim that GFCI outlets can potentially damage coffee machines.
Can they damage Nespresso machines? If so, can they damage other coffee machines (I also have a Baratza grinder and a Bonavita drip machine I usually hook into the same outlet)? They sent us a replacement and now I am questioning where to put it. | null | [] |
> the Nespresso support agent said that the machines should not be connected to a GFCI outlet
Where do they think Nespresso machines are used? Bedrooms? No, they're kitchen appliances and GFCIs are widely used in kitchens. GFCIs are code requirements and have been for ~20 years. If they are unable to make a GFCI-compatible machine, then their machine is unfit for purpose, and they have breached their implied warranty. Also, the things use water, for Pete's sake.
That said, machines do break. On a modern "mass produced, not expected to be repaired" machine, that means it is "at end-of-life".
> This call with Nespresso was the first time I'd ever seen someone claim that GFCI outlets can potentially damage coffee machines.
LOL no. Such a claim is both nonsensical and inexcusable, and is certainly not coming from their technical department, nor has it been reviewed by their legal team, nor will you ever get them to put that in writing. Most likely it is because companies put tremendous pressure on customer service agents to have a high rate of good ratings, "close it in 1 call" and "avoid shipping replacement units". A GFCI-trip call is a worst-case scenario for an agent, and so they're going to say whatever they need to to get a satisfactory 1-call close. That is about their only option.
In a rare case, the GFCI might be end-of-life and not be working properly, but that is easily tested by plugging it into the next outlet over that is on a different GFCI, since kitchens generally have 2 independent circuits.
Now, other things can damage a coffee maker. Notably a "Lost Neutral", where the neutral lead from the power company breaks, causing neutral in the house to wander - the two 120V "halves of the service" still add up to 240V but they are lopsided, with one side higher than 120V. That is particularly hard on resistive heat appliances, since their heat increases by the square of voltage increase. Lost neutrals can be pernicious and difficult to detect, so it can go weeks or months without being noticed. It's not like service panels have voltmeters on them. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 702, "passed_quality_check": "True", "flesch_reading_ease": 70.39908137583895, "entropy": 4.867485241425321} |
I have extremely bad posture, what can I do? | null | [] | You can do stretches, strengthening and breathing exercises to improve your posture. But you also have to become aware of your posture and find a way to interrupt prolonged sitting intervals with brief breaks. A good ergonomic chair is also helpful.
## Musculo-Skeletal Effects of Poor Sitting Posture
Poor sitting posture can create muscle imbalances over time.
* In general, your front structures shorten, with muscles like your hip flexors and pectorals tightening up - pulling you into a ball. Your crunched sitting posture caves your lower ribs down towards your abdomen, which prevents the diaphragm from expanding freely.
* At the same time the muscles of your back stretch out and weaken, especially in the range where they should be working to maintain your posture.
As this posture persists over time, your joints begin to lose normal range of motion as well, making it more difficult to assume a good posture.
## Correcting Slumped Sitting Posture
Tailor an exercise program to stretch tightened muscles and strengthen weakened muscles. Additionally, you need a method to become aware of your posture and correct it while you are sitting. This is difficult because as you say, your attention is on your work. Exercise programs like Yoga, Tai-Chi and Pilates are good because they all address and make you very aware of your posture, joint alignments, flexibility, core control and breathing.
Use Specific Exercises to Correct Muscle Imbalances:
* Back, Upper Back and Scapular muscles: Back Extensions strengthen your paraspinals. Use different arm positions (Y, T, W, L) to target your lower traps, mid traps, rhomboids and scapular stabilizors. Learn the feel of retracting your scapulas.
You can do these on the floor next to your desk.
Or if you prefer not to get on the floor, use resistance bands for reverse flys, wide rows, narrow rows and rotations.
Away from work, you can also strengthen these muscles using weights, cables and body weight exercises like inverted rows, cable rows, bent over rows, reverse flys etc. And squats are a good for strengthening multiple muscles important to posture.
* Core: Plank, Side Plank, Bird Dog and Bridge will stabilize your trunk and spine.
* Stretches: - Hip Flexors, Hamstrings, Abs and Pecs. The wall pec stretch stretches the pecs but also contracts the rhomboid and trapezius scapular muscles to help improve the positioning of the shoulder and shoulder blade and is easy to do at work.
* Sitting Posture and Breathing Awareness - This sitting posture exercise helps make you aware of your sitting alignment and expanding your diaphragm. Practice a few diaphragmatic breaths throughout the day along with this sitting exercise to elongate your spine.
Create your ideal short exercise routine that you can do during short breaks during the day. An exercise ball next to your desk can help you target these muscles in just a few minutes:
* Lie back over the ball and stretch out your front
* Lie face down over the ball and do the Y, W, T, L exercises.
And consider using the ball as your desk chair for short periods.
## Remembering your Posture throughout the day:
This is the hard part. As your muscle imbalances begin to resolve and your diaphragmatic breathing improves, you’ll find that the slouched posture becomes less comfortable and you will automatically begin to sit better. Until then, use a timed reminder or try tying your posture corrections to tasks that you do at the computer regularly. For example, correct your posture each time you check your email or some other specific link. Turn on your web cam for visual reminders.
Keep at it until you feel the improvement. You'll have less problems going forward if you improve your posture. It either gets better - or it gets worse. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 36, "passed_quality_check": "True", "flesch_reading_ease": 94.30000000000001, "entropy": 0.4066262463236905} |
How to add margin top to ```class="row"``` elements using twitter bootstrap framework? | null | [] | Editing or overriding the row in Twitter bootstrap is a bad idea, because this is a core part of the page scaffolding and you will need rows without a top margin.
To solve this, instead create a new class "top-buffer" that adds the standard margin that you need.
```.top-buffer { margin-top:20px; }```
And then use it on the row divs where you need a top margin.
```<div class="row top-buffer"> ...``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 75, "passed_quality_check": "False", "flesch_reading_ease": 75.76500000000001, "entropy": 0.24049486266482206} |
In FTL: Faster Than Light, what triggers crew experience increases? | null | [] | ###Piloting
A crew member who is controlling a helm gains one point of experience for each incoming projectile that is dodged during combat. This includes asteroids, so long as you are in combat at the time. (Note that the dodges do not count while your ship is under the effects of +60% evasion from a cloak.)
This can be explicitly trained by finding an enemy that cannot break through your shields, and letting him fire at you. Increasing your dodge chance (i.e. more power to engines) will help this occur quicker, but even with the default chance you can max out the skill eventually.
###Engines
Exactly as for piloting, a crew member manning the engines gains one point of XP for each projectile evaded while uncloaked in combat - and so it can be trained in the same way.
###Weapons
A crew member who is manning the weapons station gains one point of experience for each weapon that is fired. It doesn't matter whether it hits or misses, or whether it can do damage (e.g. a beam weapon fired at a shield).
(Multi-shot weapons such as burst lasers still count as a single fire order and hence a single XP.)
This can be trained by firing non-lethal weapons (e.g. ones that do ion damage) repeatedly against a target that cannot damage you.
###Shields
A crew member manning the shields station gains one point of experience for each shield "bubble" that is brought back up during combat. It doesn't matter why the shields were down, be that direct enemy fire, ion damage, asteroid strikes, or even manually removing and then restoring power to the shield systems.
This can be trained by letting a target that cannot break through all of your shields repeatedly take one layer down. (For fastest results, move your pilot away from the helm, or reduce your engines to 0 power. This will reduce your dodge chance to 0%, ensuring that everything hits your shields.)
Note that you used to be able to manually remove power and restore it in combat, to get points, but this has been patched.
###Repair
A crew member gains one point of experience for getting the "finishing blow" when repairing a (sub)system. Note that hull breaches provide no repair experience.
There's no specific way to train this as it requires your systems to be damaged. Though you can focus the XP gains by always having the same person run around and repair everything after a battle.
###Combat
A crew member gains one point of experience for getting the finishing blow when fighting an enemy crew member or (sub)system.
This generally gets trained in "real" situations. However, since XP is gained for taking out systems, a few extra XP can be gained from boarding if one tries to take out the enemy's systems before the crew. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 58, "passed_quality_check": "True", "flesch_reading_ease": 74.86000000000003, "entropy": 0.17561556599603173} |
In Adobe Photoshop I am able to select multiple layers at once with Shift+Click.
How can I do that in GIMP? | null | [] | It's not possible. The only thing you can do is link layers together or merge them.
Instead, there are several alternative methods of handling multiple layers, each appropriate to a different task. You can chain layers to move or transform them, shift-click objects on different layers in the canvas to align them, or create a new layer based on all visible layers.
Instructions:
* Chaining Layers:
1) Make sure the Layers dialog box is open. If it is not, click on the Windows menu and select "Dockable Dialogs" > "Layers." You can also hold down the "Ctrl" key and press "L."
2) Hover your mouse between the Layer Visibility icon, which looks like an eye, and the layer thumbnail. A raised box will appear. Click this box to activate the chain.
3) Repeat this process on all of the other layers you wish to chain together. Once these layers are chained, you can move or apply transformation effects to all of them simultaneously.
* Selecting for Alignment
4) Make sure the Toolbox is open. If it is not, click on the Windows menu and select "Toolbox." You can also hold down the "Ctrl" key and press "B."
5) Click on the Alignment Tool in the Toolbox. It looks like an empty rectangle with arrows pointing away from all sides of it.
6) Press and hold the "Shift" key while clicking on the layers in the canvas. Or click somewhere off the layers and drag a selection rectangle around them. Once you have selected these layers, you can choose an alignment method.
* Merging Visible Layers
7) Make sure the Layers dialog box is open. If it is not, click on the Windows menu and select "Dockable Dialogs" > "Layers." You can also hold down the "Ctrl" key and press "L."
8) Click on the Visibility icon on any layer you do not want to include in your new layer. This will render that layer invisible.
9) Click on the Layers menu and select "New from Visible."
10) Select the option that best describes how you want the layers to be merged from the dialog box that appears, and click "Merge." This will merge all of the visible layers into a new layer but will not affect your original layers. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 88, "passed_quality_check": "True", "flesch_reading_ease": 95.68818181818183, "entropy": 0.6483154261832729} |
In the python built-in open function, what is the exact difference between the modes ```w```, ```a```, ```w+```, ```a+```, and ```r+```?
In particular, the documentation implies that all of these will allow writing to the file, and says that it opens the files for "appending", "writing", and "updating" specifically, but does not define what these terms mean. | null | [] | The opening modes are exactly the same as those for the C standard library function ```fopen()```.
The BSD ```fopen``` manpage defines them as follows:
The argument mode points to a string beginning with one of the following sequences (Additional characters may follow these sequences.):
* ``r'' Open text file for reading. The stream is positioned at the beginning of the file.
* ``r+'' Open for reading and writing. The stream is positioned at the beginning of the file.
* ``w'' Truncate file to zero length or create text file for writing. The stream is positioned at the beginning of the file.
* ``w+'' Open for reading and writing. The file is created if it does not exist, otherwise it is truncated. The stream is positioned at the beginning of the file.
* ``a'' Open for writing. The file is created if it does not exist. The stream is positioned at the end of the file. Subsequent writes to the file will always end up at the then current end of file, irrespective of any intervening fseek(3) or similar.
* ``a+'' Open for reading and writing. The file is created if it does not exist. The stream is positioned at the end of the file. Subsequent writes to the file will always end up at the then current end of file, irrespective of any intervening fseek(3) or similar.
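A quick demonstration of how a few of these modes behave in Python (the file name is arbitrary):
```
# 'w' truncates: any previous content of demo.txt is lost
with open("demo.txt", "w") as f:
    f.write("first\n")

# 'a' appends: the stream is positioned at the end
with open("demo.txt", "a") as f:
    f.write("second\n")

# 'r+' reads and writes without truncating, positioned at the start
with open("demo.txt", "r+") as f:
    print(f.read())     # prints both lines
    f.write("third\n")  # written at the current position (now the end)
```
 | {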
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 306, "passed_quality_check": "False", "flesch_reading_ease": 11.765000000000015, "entropy": 2.9979477692855574} |
How can democracy not be the rule of the poor? | null | [] | Here are some possible explanations why democracy is not necessarily the rule of the poor:
* Voters voting by habit, not by analysis. Many people vote the same way their parents voted. Other vote exactly the opposite way. Neither involves analysis of their objective economic interest.
* A middle class believing to be rich, a lower middle class believing to be middle class. In many countries, voters tend to vote in the interest of the class they aspire to be, not in the interest of the class they actually are. Part of that is self-deception about inequality and the own position. That is visible in tax policy, for instance, where the middle class votes against taxes on the upper class.
* An upper class better able to work the political system than the poor. In some countries, attempts to simplify access to the political system (e.g. through direct democracy or participatory budgeting) have backfired when it comes to the poor. These mechanisms remain complicated enough that it takes some familiarity with the political system to get anything done. Knowing how to organize a grassroots movement, how to write minutes of a meeting, how to collect signatures for a ballot initiative. So instead of better welfare in poor parts of town, there are better playgrounds in rich parts of town. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 37, "passed_quality_check": "True", "flesch_reading_ease": 103.625, "entropy": 0.3745691314311634} |
How can I write colored text to the Windows console with C++? That is, how can I write different text with different colors? | null | [] | The Windows console color attributes are: 0=black, 1=blue, 2=green and so on up to 15=white; colorattribute = foreground + background * 16.
To get red text on yellow use 4 + 14*16 = 228, light red on yellow would be 12 + 14*16 = 236.
Here is an example for coloring text in the Windows console:
```
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
    // you can loop k higher to see more color choices
    for (int k = 1; k < 255; k++)
    {
        // pick the colorattribute k you want
        SetConsoleTextAttribute(hConsole, k);
        cout << k << " I want to be nice today!" << endl;
    }
    return 0;
}
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 102, "passed_quality_check": "True", "flesch_reading_ease": 84.8146739130435, "entropy": 0.7040853604295734} |
What's the best way to create a temporary file in Android?
Can File.createTempFile be used? The documentation is very vague about it.
In particular, it's not clear when temporary files created with ```File.createTempFile``` are deleted, if ever. | null | [] | Here is an example:
```
File outputDir = context.getCacheDir(); // context being the Activity pointer
File outputFile = File.createTempFile("prefix", ".extension", outputDir);
```
Explanation:
1. The `getCacheDir()` method returns the directory on the filesystem where your app can create cache files.
2. The `createTempFile()` function creates a temporary file in a given directory. Note that files in the cache directory are not deleted at any fixed time, but the system may clear them when the device runs low on storage, so delete them yourself once you no longer need them. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 211, "passed_quality_check": "True", "flesch_reading_ease": 67.57625000000003, "entropy": 1.369111215103905} |
I have javascript function like this:
```function myFunction(number) {
var x=number;
...
... more initializations
//here need to wait until flag==true
while(flag==false)
{}
...
... do something
}
```
The problem is that the javascript is stuck in the while and stuck my program. so my question is how can I wait in the middle of the function until flag is true without "busy-wait"? | null | [] | Because javascript in a browser is single threaded (except for webworkers which aren't involved here) and one thread of javascript execution runs to completion before another can run, your statement:
```while(flag==false) {}
```
will simply run forever (or until the browser complains about a non-responsive javascript loop), the page will appear to be hung and no other javascript will ever get a chance to run, thus the flag's value can never be changed.
For a little more explanation, Javascript is an event driven language. That means that it runs a piece of Javascript until it returns control back to the interpreter. Then, only when it returns back to the interpreter, Javascript gets the next event from the event queue and runs it.
All things like timers and network events run through the event queue. So, when a timer fires or a network request arrives, it does not ever "interrupt" the currently running Javascript. Instead, an event gets put in the Javascript event queue and then, when the currently running Javascript finishes, the next event is pulled from the event queue and it gets its turn to run.
So, when you do an infinite loop such as ```while(flag==false) {}```, the currently running Javascript never finishes and thus the next event is never pulled from the event queue and thus the value of ```flag``` never gets changed. The key here is that Javascript is not interrupt driven. When a timer fires, it does not interrupt the currently running Javascript, run some other Javascript and then let the currently running Javascript continue. It just gets put in the event queue waiting until the currently running Javascript is done to get its turn to run.
What you need to do is rethink how your code works and find a different way to trigger whatever code you want to run when the ```flag``` value changes. Javascript is designed as an event-driven language, so figure out which events you can register an interest in. You can listen for whatever event might cause the flag to change and examine the flag when that event fires, you can trigger your own event from whatever code changes the flag, or you can implement a callback function. With the callback approach, whenever the piece of code responsible for the flag changes its value to ```true```, it just calls the callback function, and thus your code that wants to run when the flag becomes ```true``` gets to run at the right time. This is much, much more efficient than trying to use some sort of timer to constantly check the flag value.
```function codeThatMightChangeFlag(callback) {
// do a bunch of stuff
if (condition happens to change flag value) {
// call the callback to notify other code
callback();
}
}
```
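For completeness, the same callback pattern can be wrapped in a Promise so the waiting code reads top to bottom (a sketch; the function names come from the example above and are otherwise arbitrary):
```function waitForFlag() {
    return new Promise(resolve => {
        // pass resolve as the callback; it fires when the flag-changing code calls it
        codeThatMightChangeFlag(resolve);
    });
}

async function myFunction(number) {
    var x = number;
    // ... more initializations
    await waitForFlag(); // yields to the event loop instead of spinning
    // ... do something
}
```
 | {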
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 330, "passed_quality_check": "False", "flesch_reading_ease": 79.62352272727276, "entropy": 1.8689620837639775} |
According to this famous blog post, the effective transcript length is:
$\tilde{l}_i = l_i - \mu$
where $l_i$ is the length of transcript and $\mu$ is the average fragment length. However, typically fragment length is about 300bp. What if when the transcript $l_i$ is smaller than 300? How do you compute the effective length in this case?
A related question: when computing the FPKM of a gene, how to choose a transcript? Do we choose a "canonical" transcript (how?) or combine the signals from all transcripts to a gene-level FPKM? | null | [] | The effective length is $\tilde{l}_i = l_i - \mu + 1$ (note the R code at the bottom of Harold's blog post), which in the case of $\mu
Regarding choosing a particular transcript, ideally one would use a method like salmon or kallisto (or RSEM if you have time to kill). Otherwise, your options are (A) choose the major isoform (if it's known in your tissue and condition) or (B) use a "union gene model" (sum the non-redundant exon lengths) or (C) take the median transcript length. None of those three options make much of a difference if you're comparing between samples, though they're all inferior to a salmon/kallisto/etc. metric.
Why are salmon et al. better methods? They don't use arbitrary metrics that will be the same across samples to determine the feature length. Instead, they use expectation maximization (or similarish, since at least salmon doesn't actually use EM) to quantify individual isoform usage. The effective gene length in a sample is then the average of the transcript lengths after weighting for their relative expression (yes, one should remove $\mu$ in there). This can then vary between samples, which is quite useful if you have isoform switching between samples/groups in such a way that methods A-C above would miss (think of cases where the switch is to a smaller transcript with higher coverage over it...resulting in the coverage/length in methods A-C being tamped down). | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 447, "passed_quality_check": "True", "flesch_reading_ease": 77.04439003436428, "entropy": 3.1142939561539307} |
What is the significance of 1/1/1753 in SQL Server? | null | [] | The decision to use 1st January 1753 (```1753-01-01```) as the minimum date value for a datetime in SQL Server goes back to its Sybase origins.
The significance of the date itself though can be attributed to this man.
Philip Stanhope, 4th Earl of Chesterfield. Who steered the Calendar (New Style) Act 1750 through the British Parliament. This legislated for the adoption of the Gregorian calendar for Britain and its then colonies.
There were some missing days (internet archive link) in the British calendar in 1752 when the adjustment was finally made from the Julian calendar. September 3, 1752 to September 13, 1752 were lost.
Kalen Delaney explained the choice this way
> So, with 12 days lost, how can you compute dates? For example, how can you compute the number of days between October 12, 1492, and July 4, 1776? Do you include those missing 12 days? To avoid having to solve this problem, the original Sybase SQL Server developers decided not to allow dates before 1753. You can store earlier dates by using character fields, but you can't use any datetime functions with the earlier dates that you store in character fields.
The choice of 1753 does seem somewhat anglocentric, however, as many Catholic countries in Europe had been using the calendar for 170 years before the British implementation (originally delayed due to opposition by the church). Conversely, many countries did not reform their calendars until much later: 1918 in Russia, for instance. Indeed, the October Revolution of 1917 started on 7 November under the Gregorian calendar.
Both ```datetime``` and the new ```datetime2``` datatype mentioned in Joe's answer do not attempt to account for these local differences and simply use the Gregorian Calendar.
So with the greater range of ```datetime2```
```SELECT CONVERT(VARCHAR, DATEADD(DAY,-5,CAST('1752-09-13' AS DATETIME2)),100)
```
Returns
```Sep 8 1752 12:00AM
```
One final point with the ```datetime2``` data type is that it uses the proleptic Gregorian calendar projected backwards to well before it was actually invented so is of limited use in dealing with historic dates.
This contrasts with other Software implementations such as the Java Gregorian Calendar class which defaults to following the Julian Calendar for dates until October 4, 1582 then jumping to October 15, 1582 in the new Gregorian calendar. It correctly handles the Julian model of leap year before that date and the Gregorian model after that date. The cutover date may be changed by the caller by calling ```setGregorianChange()```.
A fairly entertaining article discussing some more peculiarities with the adoption of the calendar can be found here. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 43, "passed_quality_check": "True", "flesch_reading_ease": 75.50000000000003, "entropy": 0.2977188087634301} |
I saw this video where someone says that electromagnetic wave is a chain reaction of electric and magnetic fields creating each other so the chain of wave moves forward.
I wonder where the photon is in this explanation. What is the relation between electromagnetic wave and photon? | null | [] | Both the wave theory of light and the particle theory of light are approximations to a deeper theory called Quantum Electrodynamics (QED for short). Light is not a wave nor a particle but instead it is an excitation in a quantum field.
QED is a complicated theory, so while it is possible to do calculations directly in QED, we often find it simpler to use an approximation. The wave theory of light is often a good approximation when we are looking at how light propagates, and the particle theory of light is often a good approximation when we are looking at how light interacts i.e. exchanges energy with something else.
So it isn't really possible to answer the question where the photon is in this explanation. In general if you're looking at a system, like the one in the video, where the wave theory is a good description of light you'll find the photon theory to be a poor description of light, and vice versa. The two ways of looking at light are complementary. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 236, "passed_quality_check": "True", "flesch_reading_ease": 65.4575, "entropy": 1.5093971555519847} |
In The Light Fantastic, after talking about the dimensions of the Pyramid of Tsort, it says
All in all, it was a lot of effort to go through just to sharpen a razor.
What's the joke here? | null | [] | In the real world, there is or was a belief, brought on by the Egyptomania of the early 20th century, that pyramids had special powers—including, specifically, the power to sharpen or maintain the sharpness of razor blades.
Pyramid power refers to the belief that the ancient Egyptian pyramids and objects of similar shape can confer a variety of benefits. Among these assumed properties are the ability to preserve foods, sharpen or maintain the sharpness of razor blades, improve health, function "as a thought-form incubator", trigger sexual urges, and cause other effects. Such unverified theories regarding pyramids are collectively known as pyramidology.
Czechoslovakian Karel Drbal even patented a pyramid-shaped device specifically designed for razor blades. Among the specifications:
It is beneficial to leave a new blade in the pyramid one to two weeks before using it. It is essential to place it there immediately after the first shave, and not the old, dull one. But it is possible to use an old one, if it is properly resharpened. The blade placed using the method above is left unobstructed until the next shave. The west edge should always face west. It improves the sharpening effect.
| {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 155, "passed_quality_check": "True", "flesch_reading_ease": 85.08865384615387, "entropy": 1.6883603722125626} |
After a ```git pull origin master```, I get the following message:
warning: Pulling without specifying how to reconcile divergent branches is
discouraged. You can squelch this message by running one of the following
commands sometime before your next pull:
git config pull.rebase false # merge (the default strategy)
git config pull.rebase true # rebase
git config pull.ff only # fast-forward only
You can replace "git config" with "git config --global" to set a default
preference for all repositories. You can also pass --rebase, --no-rebase,
or --ff-only on the command line to override the configured default per
invocation.
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (4/4), 51.49 KiB | 850.00 KiB/s, done.
The pull seems successful, but I am unsure.
What can I do to fix this? | null | [] | This is a new warning added in Git 2.27:
* "git pull" issues a warning message until the pull.rebase configuration variable is explicitly given, which some existing users may find annoying---those who prefer not to rebase need to set the variable to false to squelch the warning.
To remove the warning, set one of the suggested values to your preferred default behaviour for git pull if you don't specify behaviour on the command line (using --ff, --no-ff, --ff-only, --rebase). In all cases, git will attempt a fast-forward merge if possible. The settings control what happens when your local branch has commits that are not present on the remote branch (i.e. the two branches have diverged).
```
git config pull.rebase false # merge (the default strategy)
```
This is the existing default behaviour; set this for no warning, and no change in behaviour; git will merge the remote branch into your local one.
```
git config pull.rebase true # rebase
```
Here, git will attempt to rebase your changes on top of the remote branch.
```
git config pull.ff only # fast-forward only
```
If a fast-forward merge is not possible, git will refuse to proceed.
> Refuse to merge and exit with a non-zero status unless the current HEAD is already up-to-date or the merge can be resolved as a fast-forward | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 842, "passed_quality_check": "False", "flesch_reading_ease": 63.18752184769039, "entropy": 5.133710900335823} |
In India, we eat rice using our fingers. Generally in the West, a fork or spoon is used. I have tried eating rice with a spoon, but I don't feel satisfied with it.
We eat dry rice but we mix curry and vegetables with it and then eat it with our hands.
Is there a way to eat rice with your hands in front of Westerners such that it doesn't appear to be disgusting to them? By disgusting I mean that they shouldn't feel like vomiting or looking away to avoid me. Even though in India we eat with our fingers, many people eat such that their whole palm is covered with food. That indeed looks disgusting.
I cannot avoid hurting them by following different etiquette, but I certainly want to maintain a degree of cleanliness. | null | [] | First and foremost, you can eat the way you want. When it comes to Western culture, the only things that are generally disliked are noises, eating with the mouth wide open (including speaking with a full mouth), and (excessively) dirtying the table. Eating rice with your hands will (mostly) be seen as strange, not disgusting.
Second, context. Are you eating at a formal dinner with your colleagues and a client? Eat with a fork, like everybody else. Are you eating at the workplace canteen? Eat the way you prefer and let others deal with their own limits. And so on.
Third, respect and open-mindedness: you are worried other people will find you disgusting for eating rice with your hands, but at the same time you show a similar closed-mindedness (in a way, OK? I'm not trying to offend you) in insisting on eating rice with your hands at all costs, as if you otherwise would not feel satisfied. Believe me, eating with a fork will not starve you, nor will it change the food's nutritional content.
This applies even more if you are eating at a foreign restaurant; e.g. at an Italian restaurant, rice is cooked and meant to be eaten with a fork. Respect the local culture, open yourself to a new experience, and try to learn a different way to enjoy food.
In the end: feel free to do whatever fits you best; don't be afraid of other people judging you (their fault), but at the same time don't shut yourself in your own garden (your fault). | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 587, "passed_quality_check": "True", "flesch_reading_ease": 87.43297482014391, "entropy": 4.583489101608835} |
The typical argument goes like this:
Without net neutrality, cable companies could censor websites, favoring their own business partners.
Typically, proponents of legislation point to some perceived injustice, and argue that new laws are needed to address it. But the very use of the subjunctive in the quotation (could censor) suggests that this might be considered by its opponents as a solution in search of a problem. If so, why haven't they used that rhetorical tactic? Conversely, if such incidents have occurred, why don't the neutrality supporters cite them? | null | [] | There have been a few examples; in part, these examples have pushed the issue further into the spotlight. The practice often goes under the term 'traffic shaping'; if you are interested, Sandvine is one company that offers such shaping services. If you are interested in the peer-to-peer domain (which is heavily bringing this issue to the forefront), there is a national list of ISPs and their throttling practices against BitTorrent users. I'm curious whether this documented list counts as citing.
For the US:
Verizon has run testing on it.
“YouTube is being throttled to 10Mbps as well,” one person wrote on Thursday. “In the ‘stats for nerds,’ it would load at roughly 1,250KBps which translates to 10Mbps. Put the VPN on and that number tripled easily. Didn’t have an issue playing 1080p in 60fps, though.”
(Part of the problem with throttling is that if you know your way around it, there isn't an issue; it is a tax on the non-tech-savvy.)
Verizon stated they were not throttling; however, articles suggest they did and still do, including one stating that Verizon is actively throttling video connections over their network. Over to Comcast, who had been engaging in this practice but did not admit it until much more recently:
* When Comcast throttled BitTorrent users, a lawsuit was brought against them and won.
* Sprint: they deny that they do, though it's apparent they can. User backlash has resulted in people monitoring whether they are being throttled. Of course, there are several accusations.
* Bell and Rogers in Canada.
Telus has also blocked union sites and has engaged in some pretty underhanded anti-union tactics. I'm not sure whether blocking sites falls under net neutrality, though. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 483, "passed_quality_check": "True", "flesch_reading_ease": 60.06749450549455, "entropy": 3.278930459694573} |
Does having a longer Ethernet cable slow your connection? | null | [] | For all practical purposes, there will be no effect on the speed of your connection.
There will be a very insignificant amount of delay due to long cables. This won't affect the maximum speed of your connection, but it would cause some latency. pjc50 points out that it's about a nanosecond for every foot of cable length, which is a good rule of thumb used by many engineers when developing systems which are very dependent on latencies on those timescales.
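To put numbers on that rule of thumb, here is a minimal Python sketch (the 1 ns per foot figure is the approximation mentioned above; the exact value depends on the cable's velocity factor):
```
# Back-of-the-envelope propagation delay using the ~1 ns per foot rule of thumb.
NS_PER_FOOT = 1.0

for feet in (10, 100, 330):  # 330 ft is roughly the 100 m Ethernet limit
    delay_ns = feet * NS_PER_FOOT
    print(f"{feet:4d} ft -> ~{delay_ns:.0f} ns ({delay_ns / 1e6:.6f} ms)")
```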
In reality, you will never notice a difference. A "fast" ping time on the Internet is 10 ms, which is 10,000,000 ns. Adding even a few hundred feet of cable isn't going to have a noticeable effect at that point. In fact, nearly every step of the way involves delays which are more extreme than those seen from signal propagation. For example, most consumer-grade routers will wait for the last byte of an incoming packet to be received and check it for errors before sending the first byte of the packet on its way. This delay will be on the order of 5,000 ns! Given that the maximum cable run allowed by the Ethernet spec is 100 m (about 330 ft), the cable itself could never cause more than roughly 330 ns of delay! | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 49, "passed_quality_check": "True", "flesch_reading_ease": 66.10000000000002, "entropy": 0.13613456534862975} |
Border around formatted text in Inkscape | null | [] | Here is the solution, step by step:
* Create some flowed text by using the text tool to first drag a rectangle and then type.
* Separately, create a rectangle roughly around the text (really, the rectangle can be anywhere)
* With the rectangle selected, choose "Path" > "Linked Offset". Then grab the small diamond and drag it in to make a second, smaller rectangle.
* Select the text and the smaller rectangle and choose "Text" > "Flow into frame". Then select the smaller rectangle and remove the stroke to make it invisible. Now the text will flow within the smaller (invisible) rectangle, which is dynamically linked to the shape of the larger (visible) rectangle.
* Then, if you like, you can add a small border radius on the visible rectangle and you get this nice looking result.
* If you do need to modify the space between the text and the rectangle, you can carefully select it by grabbing it together with the text and then subtracting the text from the selection, or you can drill down to it within the XML Editor. Once you have it selected, you can adjust the margin.
* Finally, group the text and the two rectangles, and you're all set! | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 35, "passed_quality_check": "False", "flesch_reading_ease": 59.745000000000005, "entropy": 0.051268020591018794} |
I learned about the equilibrium constant. Now, I've seen that the equilibrium constant of burning is extremely small $(K \ll 1)$. Here, I have a question. You see, $K$ is still NOT 0, which means that the forward reactions happen at least a tiny bit. Then, shouldn't we see some parts of anything burning at least a little bit? | null | [] | The equilibrium constant for combustion of organic matter in air with oxygen is not small, but extremely large ($K_\mathrm{eq} \gg 1$), as is expected from a reaction that is simultaneously very exothermic and (usually) increases entropy due to the formation of more gaseous molecules than the input oxygen.
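For reference, the standard thermodynamic relations behind this statement (general relations, not specific to any particular fuel) are $\Delta G^\circ = \Delta H^\circ - T\Delta S^\circ$ and $K_\mathrm{eq} = e^{-\Delta G^\circ/RT}$: a large negative $\Delta H^\circ$ (very exothermic) combined with a positive $\Delta S^\circ$ (more gas molecules produced) makes $\Delta G^\circ$ strongly negative, and hence $K_\mathrm{eq}$ enormous.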
The major reason carbon-based life can exist at ambient temperature in an oxygen atmosphere is purely kinetic, not thermodynamic. You, the tree outside and everything else made of carbon are right now undergoing continuous combustion. However, in the absence of catalyst, this process is too slow below a couple hundred degrees Celsius for it to be self-sustaining. More technically, combustion of organic matter is a highly exergonic process, but the activation energy is high. The meagre amount of heat generated by the handful of molecules reacting is too quickly diluted into the surroundings, and the reaction does not accelerate and spiral out of control (a fire, as described by the eternal Feynman).
Very luckily for us, Life figured out this vast untapped source of chemical energy held back by kinetics approximately three billion years ago and developed a whole metabolic process to extract this energy in a stepwise fashion using catalysis, which we call aerobic respiration. Without it, multicellular organisms could well never have evolved. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 269, "passed_quality_check": "True", "flesch_reading_ease": 83.17920081967215, "entropy": 2.452183698182665} |
The name "Bleach" seems to be having no relevance to the plot unlike most other series. Was it just chosen at Kubo-sensei's whim or does it have some significance? Maybe some cultural significance associated with shinigami, etc. that I am now aware of? | null | [] | Kubo himself has commented on this. Below is the quote from Bleach Answers Wiki:
> The title wasn't Bleach when Tite Kubo decided to draw a story about Shinigami. This was before he drew the one-shot manga that appeared in Akamaru Jump. The weapon wasn't a sword, but a scythe. Only Rukia had a scythe and the other characters used guns. At that point, the title was Snipe (as in "Sniper"). Right before he started drawing, he began to think that a sword would be better and realized he couldn't use Snipe as a title anymore. He began looking for a title that grasped the bigger picture. Shinigami are associated with the color black, but it would have been boring to use "black". "White", on the other hand, can suggest black as a complementary color. So Tite Kubo chose "Bleach" to evoke the impression of the color white.
There are also other theories, which aren't as well supported by interviews with Kubo but still seem to have some evidence supporting them. One is that shinigami "cleanse" souls, similar to what bleach does (Kubo himself mentioned this in an interview, but it's not clear whether he noticed this before or after deciding on the name). Another is that Kubo is a fan of the band Nirvana, and their first album is Bleach. So there might be more than one reason, but the quoted one above is the one he usually gives when asked about it. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 210, "passed_quality_check": "True", "flesch_reading_ease": 74.66196969696972, "entropy": 1.4045855791130382} |
Why don't rally cars have airbags? | null | [] | If you are ever lucky enough to drive a competition car, the first thing that will strike you is that you can't move. Once you're in your bucket seat with your fire retardant overalls on, wearing a full face helmet and neck brace and strapped firmly in with your multipoint harness, you'll feel like you've been pinned to the seat.
You can't look over your shoulder like you can in a road car. You can't reach the location where the glove-box is. In the event of a collision, the chances of your helmet coming into contact with the steering wheel are absolutely minimal.
However, if you watch rally cars in action, especially on gravel, you'll see them yumping (rally-speak for hitting a jump), and you'll see them in and out of ditches, ruts and rivers. Many of the impacts a rally car suffers during normal use would far exceed the trigger points to deploy a standard air-bag. Deployment of such a device when landing an airborne car that's travelling in excess of 120mph (200km/h) on spectator-lined roads could have deadly consequences.
Similarly, in racing, when there are multiple cars jostling for position on the same piece of track, deployment would be likely from all the "door handling" (a term to describe "acceptable" contact between race cars), and if one in the middle of a pack were to deploy, forcing the driver's hands away from the wheel, it would likely cause carnage.
In summary, air bags are not fitted to competition cars because they could pose a serious and significant risk to both the occupants of the cars and people outside such as other competitors, marshals and spectators. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 29, "passed_quality_check": "False", "flesch_reading_ease": 116.14500000000001, "entropy": 0.13038555908550933} |
Was the Millennium Falcon a one-off or was it mass produced? | null | [] | The Millennium Falcon is not unique; it was a model YT-1300f Corellian light freighter. This implies that there are other similar ships, including other variations on the YT-1300 model.
You can find a lot of information about the Millennium Falcon in a companion book, Star Wars: The Force Awakens: Incredible Cross-Sections. The model number also appears on-screen at least once: in Star Wars: Rebels episode "Fighter Flight" on a poster.
Note, however, that the Millennium Falcon has been highly customized; some of the custom work is mentioned in The Force Awakens. It's not clear just how much of the ship is stock and how much is custom.
If you're willing to go outside Canon into Legends, the model number of the ship, and other similar Corellian light freighters, can be seen in some of the video games, and there is an entire novel called Millennium Falcon that details its history. The ship was serial #YT-1300 492727ZED, one of many such vessels, but from the beginning the ship was shown to have some "quirks" to distinguish it from its counterparts.
In Legends, the ship was also destroyed and rebuilt from scratch at least once, so it's arguable as to whether it's still the same ship or not. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 50, "passed_quality_check": "True", "flesch_reading_ease": 74.805, "entropy": 0.3599635743691582} |
Usually when I see lists of things to do to be more energy efficient, they require one to own their own home. What can I do to be more energy efficient in an apartment?
For example, I can't install solar panels, I can't upgrade/change my appliances, I can't install better insulation or windows, and I can't install a programmable thermostat.
Pretty much the only thing I can do (and have done) is switch all of my bulbs to CFLs. I also keep all of my electronics on power strips which I turn off when I leave my apartment and when I'm sleeping. | null | [] | There are a lot of little things you can do that can add up. Here are some suggestions:
* Wash clothes in cold water only.
* Turn your freezer/refrigerator up a few degrees.
* Purchase insulated curtains/shades, and close them during the day when you are at work.
* Make sure your furniture is arranged so that it is not blocking any vents or radiators.
* Purchase a draft guard to place at the bottom of your door leading to the outside where temperature-controlled air can escape from your apartment.
* Install a low-flow shower head. Shower heads screw on and off easily, so keep the old head to put back when/if you move.
* Clean Your Refrigerator: It's unlikely that you'll purchase new energy efficient appliances for your rental, but you can tweak the ones you already have. For example, by cleaning the coils on the back of your fridge you'll help one of the biggest energy hogging appliances run more efficiently.
* Use Energy-Efficient Bulbs: even if you're put off by the slightly higher upfront cost, you can simply unscrew them and take them with you when it's time to move.
* Use Power Strips: A lot of energy is wasted through electronics left plugged in, and the resulting vampire energy. Put a stop to it by using one of a number of cool, smart energy strips, which will turn off vampire energy when the electronics are not in use.
* Use a Humidifier & Lower Your Heat: Keeping humidity levels up during the winter is not only good for your health, but also for your air and furniture. It also helps keep the ambient air temperature feeling warmer than dry air, which means you can turn your thermostat down. If your HVAC system doesn't have a humidifier built-in, then you can get an inexpensive and portable humidifier to use during dry conditions. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 446, "passed_quality_check": "False", "flesch_reading_ease": 75.94285714285715, "entropy": 4.024585647105939} |
Is there any way to exit ```less``` without clearing the screen? | null | [] | This is actually a function of the terminal emulator you are using (xterm, gnome-terminal, konsole, screen). An alternate screen, or altscreen, gets launched when programs such as ```less``` or ```vim``` are invoked. This altscreen has no history buffer and exits immediately when you quit the program, switching back to the original screen which restores the previous window content history and placement.
You can prevent ```less``` from launching in an altscreen by passing the argument "-X".
```less -X /path/to/some/file```
You can also pass "-X" as an environment variable. So if you are using ```bash```, place this in ```~/.bashrc```:
```export LESS="-X"
```
However, this disables the termcap (terminal capability) initialization and deinitialization, so other views may appear off when you use ```less```.
Another option would be to use ```screen``` and set the option ```altscreen off``` in your ```~/.screenrc```. ```less``` will not clear the screen and should preserve color formatting. Presumably ```tmux``` will have the same option. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 54, "passed_quality_check": "False", "flesch_reading_ease": 100.00352941176472, "entropy": 0.25864936202650485} |
How can I do 'insert if not exists' in MySQL? | null | [] | In MySQL, ```ON DUPLICATE KEY UPDATE``` or ```INSERT IGNORE INTO table``` can be viable solutions.
* An example of ```ON DUPLICATE KEY UPDATE``` based on mysql.com:
```
INSERT INTO table (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=c+1;
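-- If a row with a=1 already exists (with a as a unique or primary key),
-- the INSERT above has the same effect as: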
UPDATE table SET c=c+1 WHERE a=1;
```
* An example of INSERT IGNORE based on mysql.com:
```
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name [(col_name,...)]
{VALUES | VALUE} ({expr | DEFAULT},...),(...),...
[ ON DUPLICATE KEY UPDATE
col_name=expr
[, col_name=expr] ... ]
```
Or:
```
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name
SET col_name={expr | DEFAULT}, ...
[ ON DUPLICATE KEY UPDATE
col_name=expr
[, col_name=expr] ... ]
```
Or:
```
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name [(col_name,...)]
SELECT ...
[ ON DUPLICATE KEY UPDATE
col_name=expr
[, col_name=expr] ... ]
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 36, "passed_quality_check": "False", "flesch_reading_ease": 95.165, "entropy": 0.27140618981516257} |
What does ```class << self``` do in Ruby? | null | [] | First, the ```class << foo``` syntax opens up ```foo```'s singleton class (eigenclass). This allows you to specialise the behaviour of methods called on that specific object.
```a = 'foo'
class << a
def inspect
'"bar"'
end
end
a.inspect # => "bar"
a = 'foo' # new object, new singleton class
a.inspect # => "foo"
```
Now, to answer the question: ```class << self``` opens up ```self```'s singleton class, so that methods can be redefined for the current ```self``` object (which inside a class or module body is the class or module itself). Usually, this is used to define class/module ("static") methods:
```class String
class << self
def value_of obj
obj.to_s
end
end
end
String.value_of 42 # => "42"
```
This can also be written as a shorthand:
```class String
def self.value_of obj
obj.to_s
end
end
```
Or even shorter:
```def String.value_of obj
obj.to_s
end
```
When inside a function definition, ```self``` refers to the object the function is being called with. In this case, ```class << self``` opens the singleton class for that object; one use of that is to implement a poor man's state machine:
```class StateMachineExample
def process obj
process_hook obj
end
private
def process_state_1 obj
# ...
class << self
alias process_hook process_state_2
end
end
def process_state_2 obj
# ...
class << self
alias process_hook process_state_1
end
end
# Set up initial state
alias process_hook process_state_1
end
```
So, in the example above, each instance of ```StateMachineExample``` has ```process_hook``` aliased to ```process_state_1```, but note how in the latter, it can redefine ```process_hook``` (for ```self``` only, not affecting other ```StateMachineExample``` instances) to ```process_state_2```. So, each time a caller calls the ```process``` method (which calls the redefinable ```process_hook```), the behaviour changes depending on what state it's in. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 28, "passed_quality_check": "False", "flesch_reading_ease": 110.055, "entropy": 0.12367299371335647} |
"I have a problem where i'm initialising a variable on the scope in a controller. Then it gets changed in another controller when a user logs in. This variable is used to control things such as the navigation bar and restricts access to parts of the site depending on the type of user, so its important that it holds its value. The problem with it is that the controller that initialises it, gets called again by angular some how and then resets the variable back to its initial value.
I assume this is not the correct way of declaring and initialising global variables, well its not really global, so my question is what is the correct way and is there any good examples around that work with the current version of angular?" | null | [] | You've got basically 2 options for "global" variables:
* use a ```$rootScope```
* use a service
```$rootScope``` is a parent of all scopes so values exposed there will be visible in all templates and controllers. Using the ```$rootScope``` is very easy as you can simply inject it into any controller and change values in this scope. It might be convenient but has all the problems of global variables.
Services are singletons that you can inject into any controller and expose their values in a controller's scope. Services, being singletons, are still 'global', but you've got far better control over where they are used and exposed.
Using services is a bit more complex, but not that much, here is an example:
```
var myApp = angular.module('myApp',[]);
myApp.factory('UserService', function() {
return {
name : 'anonymous'
};
});
```
and then in a controller:
```
function MyCtrl($scope, UserService) {
$scope.name = UserService.name;
}
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 596, "passed_quality_check": "True", "flesch_reading_ease": 65.33975939849627, "entropy": 4.729267473738343} |
How do I initialize a TypeScript Object with a JSON-Object? | null | [] | This question is quite broad; here are two solutions.
### Solution 1: Helper Method
Here's an example of using a Helper Method that you could change to fit your needs:
```class SerializationHelper {
    static toInstance<T>(obj: T, json: string): T {
var jsonObj = JSON.parse(json);
if (typeof obj["fromJSON"] === "function") {
obj["fromJSON"](jsonObj);
}
else {
for (var propName in jsonObj) {
obj[propName] = jsonObj[propName]
}
}
return obj;
}
}
```
Then use it:
```var json = '{"name": "John Doe"}',
foo = SerializationHelper.toInstance(new Foo(), json);
foo.GetName() === "John Doe";
```
* Advanced Deserialization
This could also allow for some custom deserialization by adding your own ```fromJSON``` method to the class (this works well with how ```JSON.stringify``` already uses the ```toJSON``` method, as shown below):
```interface IFooSerialized {
nameSomethingElse: string;
}
class Foo {
name: string;
GetName(): string { return this.name }
toJSON(): IFooSerialized {
return {
nameSomethingElse: this.name
};
}
fromJSON(obj: IFooSerialized) {
this.name = obj.nameSomethingElse;
}
}
```
Then use it:
```var foo1 = new Foo();
foo1.name = "John Doe";
var json = JSON.stringify(foo1);
json === '{"nameSomethingElse":"John Doe"}';
var foo2 = SerializationHelper.toInstance(new Foo(), json);
foo2.GetName() === "John Doe";
```
### Solution 2: Base Class
Another way you could do this is by creating your own base class:
```class Serializable {
fillFromJSON(json: string) {
var jsonObj = JSON.parse(json);
for (var propName in jsonObj) {
this[propName] = jsonObj[propName]
}
}
}
class Foo extends Serializable {
name: string;
GetName(): string { return this.name }
}
```
Then use it:
```var foo = new Foo();
foo.fillFromJSON(json);
```
There are too many different ways to implement a custom deserialization using a base class, and ultimately it depends on your own choice. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 50, "passed_quality_check": "True", "flesch_reading_ease": 80.30636363636367, "entropy": 0.3534922981301219} |
Why is digital photography so expensive? | null | [] | Digital photography is not expensive but buying into it is.
With film photography, cameras and lenses were a moderate investment, but they lasted quite a long time. Even a battery in a film camera would last several years. Cameras and lenses were mechanical and much simpler than today's models. This made them require less expertise to manufacture and also made them incredibly sturdy. A lifespan of a decade for a camera and several decades for lenses was not uncommon.
However, the real cost of film photography is film and development. Each roll of film costs several dollars and gives up to 36 images. Specialized rolls were more expensive and had fewer exposures. Then, if you did not do development yourself, you had to pay for development, and also for prints. Even if you did your own, chemicals and papers were recurring costs. And there was the added expense of creating a wet darkroom. In the end, the hobby was quite costly.
With digital photography, you pay nearly all costs up-front and it comes out much cheaper for anyone who is into the art and shoots regularly. A mid-range DSLR with a good lens and memory can easily be obtained for less than $2500. Many entry-level ones exist for under $1000. If you were to shoot 10,000 frames before both your camera and lens die, it would cost you between 10¢ and 25¢ per frame. That is far, far less than the cost of even just the film for each frame. Still, most cameras have shutters rated to at least 100,000 frames, and lenses last much longer than cameras. Even if you only captured 1000 images, you would have a better deal than with film photography. All digital cameras come with rechargeable batteries, good for 500-1000 shots-per-charge (for DSLRs) and can be charged at least 50-100 times before needing to be replaced. That means you actually will not use up the battery before you have taken full advantage of the better value of digital photography.
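For anyone who wants to check the arithmetic, here is a small Python sketch (the dollar figures are the round numbers used above, not actual market prices):
```
# Up-front gear cost spread over the number of frames shot.
for gear_cost in (1000, 2500):           # entry-level vs mid-range DSLR + lens
    for frames in (10_000, 100_000):     # pessimistic vs typical shutter rating
        cents = 100 * gear_cost / frames
        print(f"${gear_cost} over {frames:,} frames = {cents:.1f} cents/frame")
```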
Market conditions are forcing the price of digital photography to increase. The loss of market share of compact and ultra-compact digital cameras to cellphones forced manufacturers to discontinue such products and concentrate on higher-end offerings. While the number of unit sales of digital cameras has been declining, the average price is going up to compensate for reduced revenue. This trend is expected to continue for a while. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 35, "passed_quality_check": "False", "flesch_reading_ease": 31.545000000000016, "entropy": 0.10470262025332533} |
If user input is inserted without modification into an SQL query, then the application becomes vulnerable to SQL injection, like in the following example:
```$unsafe_variable = $_POST['user_input'];
mysql_query("INSERT INTO `table` (`column`) VALUES ('$unsafe_variable')");
```
That's because the user can input something like ```value'); DROP TABLE table;--```, and the query becomes:
```INSERT INTO `table` (`column`) VALUES('value'); DROP TABLE table;--')
```
What can be done to prevent this from happening? | null | [] | The correct way to avoid SQL injection attacks, no matter which database you use, is to separate the data from SQL, so that data stays data and will never be interpreted as commands by the SQL parser. It is possible to create an SQL statement with correctly formatted data parts, but if you don't fully understand the details, you should always use prepared statements and parameterized queries. These are SQL statements that are sent to and parsed by the database server separately from any parameters. This way it is impossible for an attacker to inject malicious SQL.
You basically have two options to achieve this:
1. Using PDO (for any supported database driver):
```
$stmt = $pdo->prepare('SELECT * FROM employees WHERE name = :name');
$stmt->execute([ 'name' => $name ]);
foreach ($stmt as $row) {
// Do something with $row
}
```
2. Using MySQLi (for MySQL):
As of PHP 8.2+, we can make use of execute_query(), which prepares, binds parameters, and executes the SQL statement in one method:
```
$result = $dbConnection->execute_query('SELECT * FROM employees WHERE name = ?', [$name]);
while ($row = $result->fetch_assoc()) {
// Do something with $row
}
```
Up to PHP 8.1:
```
$stmt = $dbConnection->prepare('SELECT * FROM employees WHERE name = ?');
$stmt->bind_param('s', $name); // 's' specifies the variable type => 'string'
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
// Do something with $row
}
```
If you're connecting to a database other than MySQL, there is a driver-specific second option that you can refer to (for example, pg_prepare() and pg_execute() for PostgreSQL). PDO is the universal option. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 451, "passed_quality_check": "False", "flesch_reading_ease": 39.33700000000003, "entropy": 1.9075384930938872} |
I want to be able to output the current loop iteration to my template.
According to the docs, there is a ```loop.counter``` variable that I am trying to use:
```
{% for user in userlist %}
*
{{ user }} {{loop.counter}}
{% if loop.counter == 1 %}
This is the First user
{% endif %}
{% endfor %}
```
But nothing is being output to my template. What is the correct syntax? | null | [] | The counter variable inside the loop is called loop.index in Jinja2.
```
>>> from jinja2 import Template
>>> s = "{% for element in elements %}{{loop.index}} {% endfor %}"
>>> Template(s).render(elements=["a", "b", "c", "d"])
1 2 3 4
```
In addition to `loop.index`, there are also
* `loop.index0` (index starting at `0`)
* `loop.revindex` (reverse index; ending at `1`)
* `loop.revindex0` (reverse index; ending at `0`)
etc. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 306, "passed_quality_check": "False", "flesch_reading_ease": 78.72993902439025, "entropy": 2.284164206950855} |
Are the players on the same team as the DM? | null | [] | The GM isn't on a team, they’re the referee.
The way Pathfinder, D&D, and similar games work is that there are one or more player characters (the "party") that are controlled by the players.
The GM (DM, in D&D) usually doesn't run a player character (PC), but they are responsible for representing all aspects of the world, all the non-player characters (NPCs) and monsters and stuff, and adjudicating the rules.
So the GM may be controlling a character or creature that is friendly to the PCs, or a character or creature that is hostile to the PCs, or both at once! The GM is the orcs you fight and the friendly cleric that heals you and the rain that falls on you.
GMs aren't supposed to be hostile to you in general; they are generally taught to give the players a fun challenge. It's no good to be "against" the players, because you can always kill PCs just by saying "rocks fall, everyone dies!" The GM is on your team inasmuch as they want everyone to have fun. Because their role is running monsters and the other stuff that you fight, they are often in opposition to you tactically, as they are responsible for the "other team" as part of their job. Some new GMs do get confused by this and decide it's them against the party, but that ends up being a problem that has to be solved by them wising up or their players leaving.
The context of your question isn't clear, so I'll add a note that might be relevant - some GMs, especially new ones, do run a PC of their own that's part of the party, these are referred to as GMPCs (or DMPCs). This is generally frowned upon because of the conflict of interest inherent in doing it, but it happens. That GMPC may be on your side, on your side most of the time, betray you, have other plans - just like any other person in the game world might. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 34, "passed_quality_check": "True", "flesch_reading_ease": 103.625, "entropy": 0.40582687234004566} |
C++ vs. The Arduino Language? | null | [] | If you have previous programming experience and you are aware of concepts such as OOP, it is better to go for C/C++. The Arduino language is really great for beginners, but has some limitations (e.g. you must have all your files in the same folder). It is basically a simplification of C/C++ (you can practically copy & paste Arduino code into a C/C++ file, and it will work). It also means you can go and use a full, well-known IDE such as Eclipse.
Initially it requires a bit more setup and configuration of your dev environment, but IMHO it is worth it for programmers with experience in any other language.
In any case, it won't harm you to start using the Arduino language and the Arduino IDE for a few days to get familiar with the Arduino hardware and then move to C/C++ with Eclipse for really developing your project. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 25, "passed_quality_check": "False", "flesch_reading_ease": 66.40000000000003, "entropy": 0.0479587451575922} |
How can I adapt Ubuntu to a high resolution display?
I have a display with 3200x1600px on only 11'' and everything looks really tiny. | null | [] | There are some steps to take:
1. System wide Menu and titlebar scaling
Starting with Ubuntu 14.04 we have an option that helps a bit:
Scaling Support
open the System Settings (here in English):
```LANG=c unity-control-center
```
Go to "Displays" and set the "Scale for menu and title bars".
Since Ubuntu 17.10 the scaling can be set in
```LANG=c gnome-control-center
```
Go to ```Settings > Devices > Displays``` there
see also: How to find and change the screen DPI?
2. Universal Access
Go to "Universal Access" (```unity-control-center universal-access```) and select "Large Text".
Note: not all applications handle this correctly; some will not reserve the extra space, so some UI elements are not accessible with this option!
3. Increase Unity dock size
In ```unity-control-center``` -> ```Appearance``` -> ```Look```, at the bottom, you can adjust the size.
4. Adapt Firefox
see: Adjust Firefox and Thunderbird to a High DPI touchscreen display (retina)
(or use Chrome, which works fine since version 41.0.2272.76 on Ubuntu 14.10, though Chrome will need to be restarted for the change to take effect)
5. Increase font in Pidgin
There is a plugin you can install:
```sudo apt-get install pidgin-extprefs
```
Then you can increase the font in Plugins->Extended Prefs
6. Create starter for applications that still don't scale
Some applications still don't obey the global scaling (mainly Java). For those few applications you can create a starter to fix the scaling; see: Fix scaling of java-based applications for a high DPI screen
Note: In older Ubuntu versions, with unity-tweak-util in the section "Fonts" you can set the "Text Scaling Factor" to 2.0. This will scale the fonts in most applications to double size. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 111, "passed_quality_check": "True", "flesch_reading_ease": 89.24350000000001, "entropy": 0.8008197238707688} |
Say I want to make a file:
```filename = "/foo/bar/baz.txt"
with open(filename, "w") as f:
f.write("FOOBAR")
```
This gives an ```IOError```, since ```/foo/bar``` does not exist.
What is the most pythonic way to generate those directories automatically? Is it necessary for me explicitly call ```os.path.exists``` and ```os.mkdir``` on every single one (i.e., /foo, then /foo/bar)? | null | [] | In Python 3.2+, using the APIs requested by the OP, you can elegantly do the following:
```
import os
filename = "/foo/bar/baz.txt"
os.makedirs(os.path.dirname(filename), exist_ok=True)
with open(filename, "w") as f:
f.write("FOOBAR")
```
With the Pathlib module (introduced in Python 3.4), there is an alternate syntax (thanks David258):
```from pathlib import Path
output_file = Path("/foo/bar/baz.txt")
output_file.parent.mkdir(exist_ok=True, parents=True)
output_file.write_text("FOOBAR")
```
In older Python, there is a less elegant way:
The ```os.makedirs``` function does this. Try the following:
```import os
import errno
filename = "/foo/bar/baz.txt"
if not os.path.exists(os.path.dirname(filename)):
try:
os.makedirs(os.path.dirname(filename))
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
with open(filename, "w") as f:
f.write("FOOBAR")
```
The reason to add the ```try-except``` block is to handle the case when the directory was created between the ```os.path.exists``` and the ```os.makedirs``` calls, protecting us from race conditions. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 363, "passed_quality_check": "False", "flesch_reading_ease": 64.33166666666669, "entropy": 1.6951573560639428} |
Assume that Jane Doe has published a paper in 2010 where she has developed a model or a theorem or a similar result, let’s say, that it relates to growth.
Now assume that Jane Doe is writing another paper in 2015, where she refers to the model/theorem from her paper in 2010.
Is it acceptable for Jane to write something like the following?
Doe’s growth model (2010), implies that ...
Doe’s growth theorem (2010) implies that ...
The Doe growth model (2010) implies ...
| null | [] | This is, unfortunately, a case where English grammar can be tricky and exactly how you phrase things is going to matter.
It is often seen as presumptuous to name something after yourself: "Newton's Laws" and "Hawking radiation" and "Rayleigh scattering" are retrospective judgements of significance by the community. Claiming a similar name is an assertion in advance that your work will be at a similar level of significance.
The problem, then, is that if you say "Doe's growth model", it is ambiguous whether that is intended to be a construction like "Hawking radiation" or whether it just means "the growth model that happens to have been developed by Doe."
This potential problem can be avoided by rephrasing to avoid the parallel construction: "the growth model by Doe et al (2010)", or better yet, "the growth model presented in (Doe, 2010)." The point here is that the significant item under discussion is the growth model, not Doe, and the sentence should be phrased to make that as clear as possible.
Finally, note that these sorts of phrasings can work both for double-blind review and for review where the authors' identities are known: even when the identity of the author is known, the important thing should be the relationship between the works, not the fraction of authors that are shared between two papers (unless you are specifically trying to talk about independent co-discovery). | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 393, "passed_quality_check": "False", "flesch_reading_ease": 70.3239697802198, "entropy": 2.50987069321269} |
I've been with my current employer for about a year now. Due to the way the company is setup, I'm the only one with knowledge on a process that is quite important to the company. The company is going through some restructuring, and has been letting people go. As the newest guy in my department, I'm obviously concerned.
My question though, is if I am let go, am I obligated to spend my time teaching someone else this process that only I know about? | null | [] | As long as you're still getting paid, you're obligated to do whatever your employer tells you to do during your official work hours (if it's within the context of your job and legal).
Refusing to do so would, at best, burn bridges and, at worst, result in them instantly terminating your contract and no longer paying you, or potentially get you into legal trouble.
If they're no longer paying you, you're not obligated to do anything. You're also not obligated to work overtime (paid or not).
If you're asked to work overtime or do unpaid work, it would be reasonable to decline or (if it's a non-trivial amount of hours) request additional payment. Although this is not to say they would think it's reasonable.
Your contract may, of course, create different obligations, and you should adhere to those instead of what's written here. Any relevant notice periods should also be specified there, in some other legal document or in some regional employment law.
If you want to leave a good impression and avoid burning bridges, it would be advisable to try to pass on knowledge (through teaching or documentation) even if you're not specifically asked to do so. You might even want to (gently) push back on other tasks and emphasise the importance of passing on your knowledge, although some won't be able to see the importance of transferring what you know. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 368, "passed_quality_check": "True", "flesch_reading_ease": 79.10235294117649, "entropy": 3.547036036292197} |
Bash test: what does "=~" do? | null | [] | The ```~``` is actually part of the operator ```=~``` which performs a regular expression match of the string to its left to the extended regular expression on its right.
```[[ "string" =~ pattern ]]
```
Note that the string should be quoted, and that the regular expression shouldn't be quoted.
A similar operator is used in the Perl programming language.
The regular expressions understood by ```bash``` are the same as those that GNU ```grep``` understands with the ```-E``` flag, i.e. the extended set of regular expressions.
Somewhat off-topic, but good to know:
When matching against a regular expression containing capturing groups, the part of the string captured by each group is available in the ```BASH_REMATCH``` array. The zeroth/first entry in this array corresponds to ```&``` in the replacement pattern of ```sed```'s substitution command (or ```$&``` in Perl), which is the bit of the string that matches the pattern, while the entries at index 1 and onwards corresponds to ```\1```, ```\2```, etc. in a ```sed``` replacement pattern (or ```$1```, ```$2``` etc. in Perl), i.e. the bits matched by each parenthesis.
Example:
```string=$( date +%T )
if [[ "$string" =~ ^([0-9][0-9]):([0-9][0-9]):([0-9][0-9])$ ]]; then
printf 'Got %s, %s and %s\n' \
"${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}"
fi
```
This may output
```Got 09, 19 and 14
```
if the current time happens to be 09:19:14.
The ```REMATCH``` bit of the ```BASH_REMATCH``` array name comes from "Regular Expression Match", i.e. "RE-Match".
In non-```bash``` Bourne-like shells, one may also use ```expr``` for limited regular expression matching (using only basic regular expressions).
A small example:
```$ string="hello 123 world"
$ expr "$string" : ".*[^0-9]\([0-9][0-9]*\)"
123
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 24, "passed_quality_check": "False", "flesch_reading_ease": 115.13000000000002, "entropy": 0.16555487212102563} |
If I have a Bash script like:
```#!/bin/bash
f() {
# echo function name, "f" in this case
}
```
Is there any way to do this? This could be used in help messages such as
```printf "Usage: %s: blah blah blah \n" $(basename $0) >&2;
```
Only in this case what I wanted is not ```$0```, which is the file name of the script. | null | [] | From the Bash Reference Manual:
FUNCNAME
An array variable containing the names of all shell functions currently in the execution call stack. The element with index 0 is the name of any currently-executing shell function. The bottom-most element (the one with the highest index) is "main". This variable exists only when a shell function is executing. Assignments to FUNCNAME have no effect and return an error status. If FUNCNAME is unset, it loses its special properties, even if it is subsequently reset.
This variable can be used with BASH_LINENO and BASH_SOURCE. Each element of FUNCNAME has corresponding elements in BASH_LINENO and BASH_SOURCE to describe the call stack. For instance, ${FUNCNAME[$i]} was called from the file ${BASH_SOURCE[$i+1]} at line number ${BASH_LINENO[$i]}. The caller builtin displays the current call stack using this information.
When bash arrays are accessed without an index the first element of the array will be returned, so ```$FUNCNAME``` will work in simple cases to provide the name of the immediately current function, but it also contains all other functions in the call stack. For example:
```# in a file "foobar"
function foo {
echo foo
echo "In function $FUNCNAME: FUNCNAME=${FUNCNAME[*]}" >&2
}
function foobar {
echo "$(foo)bar"
echo "In function $FUNCNAME: FUNCNAME=${FUNCNAME[*]}" >&2
}
foobar
```
Will output:
```$ bash foobar
In function foo: FUNCNAME=foo foobar main
foobar
In function foobar: FUNCNAME=foobar main
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 265, "passed_quality_check": "False", "flesch_reading_ease": 95.2117391304348, "entropy": 2.014735903705883} |
I know that the public practice of any religion other than Wahabbi Islam is strictly forbidden in Saudi Arabia, and there would be no places of worship. I also know that the morality police raided a a hotel several years ago where Mass was being celebrated, and arrested the priest and the acolytes.
But I am also told that many expats from countries with large Catholic communities such as the Philippines, India, and Sri Lanka do gather in private homes for worship. Is this officially tolerated, or would I endanger the hosts or other participants by asking about them? | null | [] | Essentially, there is no religious freedom in Saudi Arabia, and nobody is likely to openly change religion - indeed:
> conversion from Islam to another religion is considered apostasy and punishable by death.
All citizens are considered Muslim by the state.
In 2008, the Vatican tried to get a Catholic Church built in Saudi Arabia, and had discussions with the Saudi government, but the result was that it was not permitted.
So that brings us to the Christians (including Catholics) that DO exist in the country. There are considered to be more than a million Roman Catholics in the country, as well as adherents of many other faiths.
> Saudi Arabia allows Christians to enter the country as foreign workers for temporary work, but does not allow them to practice their faith openly. Because of that Christians generally only worship in secret within private homes. Items and articles belonging to religions other than Islam are prohibited. These include Bibles, crucifixes, statues, carvings, items with religious symbols, and others.
So, to your question about whether this secret practice is allowed or tolerated? Not so much. They have an entire 'government' organisation - The Saudi Arabian Mutaween (Arabic: مطوعين), or Committee for the Propagation of Virtue and the Prevention of Vice (i.e., the religious police) which prohibits the practice of any religion other than Islam.
The government also doesn't let clergy of other religions into the country for conducting religious services (naturally this means no Christian baptisms, weddings etc by official clergy legally).
And if they find someone practising in their homes? International Christian Concern (ICC) protested what it reported as the 2001 detention of 11 Christians in Saudi Arabia, for practicing their religion in their homes. In June 2004, ICC reported on what it termed a "pogrom-like" crackdown by Saudi police on Christians after media reports of Koran desecration in Guantanamo Bay. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 475, "passed_quality_check": "True", "flesch_reading_ease": 60.48200000000003, "entropy": 3.297299149541011} |
Is there a way to achieve protections similar to "Copyleft" under the patent system? | null | [] | "Copyleft" is a specific application of copyright law (restricting the license to use copyrighted material by requiring that those obtaining it must also redistribute it for free / under specified terms).
Any work under "copyleft" is protected by copyright law (thanks to The Berne Convention copyright is "automatic" in most of the world), and those copyright protections are what allow the author to dictate the terms under which the copyrighted work is used.
As far as I'm aware there is no Berne Convention equivalent in patent law: in order for you to have patent protection and the ability to dictate how your invention is used (by licensing the patent for it to impose Copyleft-style requirements on those using it) you must file for (and receive) a patent.
Note that the sort of "keeping information free" protection the Copyleft folks would seek is inherent in US Patent Law and most other patent systems -- it is known as Prior Art.
In its most simplistic definition (cribbed from Wikipedia):
> Prior art …, in most systems of patent law, constitutes all information that has been made available to the public in any form before a given date that might be relevant to a patent's claims of originality. If an invention has been described in the prior art, a patent on that invention is not valid.
So if you have designed and publicly released something patentable (and can document the date of such a release to the satisfaction of the patent authorities) you can seek to invalidate any subsequent patent claims on the grounds that prior art exists. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 71, "passed_quality_check": "True", "flesch_reading_ease": 59.68214285714288, "entropy": 0.4714662249415614} |
In monopoly, can an opponent put a property up for auction at a higher price than I have in cash? | null | [] | This is definitely allowed, and a good tactic to leverage a cash advantage.
There is no minimum or maximum price for an auction. The rules state:
Bidding may start at any price.
Monopoly auctions do not have "the bidding will start at $10". They just have "bidding is open". Once the auction begins, a bid by any player sets the minimum price for the auction at that bid. Your opponent could easily bid $10 on the property, forcing you to either let them have it for $10 or bid more than you have in cash. You could bid first, say starting the bid at $5, but that doesn't meaningfully change the situation, as they can just respond by out-bidding you at $10.
You may bid more money than you have in cash, but you are still obligated to pay it if you win the auction. This may be a reasonable play if you can afford it by mortgaging property and/or selling houses, especially to drive up the price for your opponent. But be careful not to bid more than you can pay after mortgages and sales, as if you win the auction and can't pay your bid, you go bankrupt. In particular, if you bid more than you can afford, you opponents can all just let you win the bid, at which point you go bankrupt and the property you just bought would go up for auction again (in addition to all your other properties). | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 78, "passed_quality_check": "True", "flesch_reading_ease": 76.55500000000004, "entropy": 0.5789175946681381} |
What is the purpose of having a countdown during a rocket launch? | null | [] | There is a window of opportunity for a launch to occur, known as a launch window. This certainly occurs when trying to get somewhere such as the International Space Station, but even earth orbiting satellites have a window of time where the rocket can meet its performance.
Bottom line is, the exact time of launch is very important, and a complex sequence of events has to happen in just the right order to make it happen. SpaceFlightNow provided a generic countdown timeline for the Falcon 9; I'll re-post a few of the highlights. Some of these things are done to keep the batteries topped off till the last minute. You don't want to put the fuel in until the last minute, to conserve it (most rocket fuel is cryogenic and boils off). When the rocket has its fuel loaded, you want to keep non-essential personnel away from it, so any tasks they need to do are put off until the end. You don't want to arm the system to launch until you are completely ready; thus, it only happens right before. There are many more such examples of why things are done in a particular order, but hopefully this will give you a sense.
```L-13:30:00 Dragon Power Up
L-7:30:00 Launch Vehicle Power Up
L-3:50:00 Liquid Oxygen Loading
L-3:40:00 Rocket Propellant 1 Loading
L-1:00:00 Weather Briefing
T-0:09:40 Verify that Terminal Countdown has started
T-0:09:30 Merlin 1C: Lox Bleeder Valves Open
T-0:07:00 Dragon to Internal Power
T-0:04:46 Stage 1 & Stage 2 Auto Sequence starts
T-0:04:30 Transfer to Internal Power
T-0:04:10 Vehicle Release Auto Sequence
T-0:03:40 TEA-TEB Ignition System Activation
T-0:03:25 Flight Termination System to Internal Power
T-0:03:11 Flight Termination System Armed
T-0:03:02 LOX Topping Termination
T-0:03:00 Second Stage Thrust Vector Actuator Test
T-0:02:00 Range Verification
T-0:01:30 Final Engine Chilldown, Pre-Valves/Bleeders Open
T-0:01:00 Flight Computer to start-up
T-0:00:50 First Stage Thrust Vector Actuator Test
T-0:00:40 Propellant Tank Pressurization
T-0:00:20 All Tanks at Flight Pressure
T-0:00:15 Arm Pyrotechnics
T-0:00:03 Merlin Engine Ignition
T-0:00:00 LIFTOFF
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 54, "passed_quality_check": "True", "flesch_reading_ease": 74.805, "entropy": 0.4122403825377015} |
How does one attack a two-time pad (i.e. one time pad with key reuse)?
I am new to cryptography and my problem is with two time pad attacks on OTP.
The problem I had in my course was that I have 10 ciphertexts encrypted with the same key $K$. I am then given another ciphertext that I should decrypt.
I know that XOR-ing two ciphers gives me the XOR of their original messages.
My question is what is the correct thing to do after that?
I tried to take 3 ciphertexts $C_1, C_2$ and $C_3$.
Then get $S_1 = C_1 \oplus C_2 \oplus $```' '```, also get $S_2 = C_1 \oplus C_3 \oplus$ ```' '```.
After that I compared all corresponding characters in $S_1$ and $S_2$,
and if $S_1[i] = S_2[i]$ then I calculate $S_1[i] \oplus C_2[i]$ to get $K[i]$.
I tried this on paper before coding and it worked, but I might be missing something.
Is this the right approach? Why does it work? | null | [] | The classical answer to "what is the correct thing to do after you have the XOR of the two original messages" is crib-dragging.
That is, you take a guess of a common phrase that may appear in one of the plaintexts (the classical example against ASCII english is the 5 letter " the "), and exclusive-or that against the XOR of the two original messages in various locations. If one of the plaintexts had the text of the crib (" the " in our example), then the result of the exclusive-or is what the other plaintext had in that position; if neither plaintext had that, it's likely that the result of the exclusive-or is just gibberish. And, once you have a plausible short section, you can extend it (for example, if you know that one of the plaintexts is " na**", you can go through the dictionary of all words that start with "na", use those as cribs, and see which makes the other plaintext make sense).
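To make the mechanics concrete, here is a minimal Python sketch of crib-dragging; the helper names and toy messages are my own, not from any standard tool:
```def xor_bytes(a, b):
    # XOR two byte strings, truncated to the shorter length
    return bytes(x ^ y for x, y in zip(a, b))

def crib_drag(p1_xor_p2, crib):
    # Slide a guessed fragment along p1 XOR p2; wherever the crib really
    # occurs in one plaintext, the output is the other plaintext's bytes.
    for i in range(len(p1_xor_p2) - len(crib) + 1):
        guess = xor_bytes(p1_xor_p2[i:], crib)
        if all(32 <= byte < 127 for byte in guess):  # printable ASCII: a candidate
            print(i, guess.decode("ascii"))

# Toy demo: one "pad" reused across two messages
pad = bytes(range(40))
c1 = xor_bytes(b"meet me at the old bridge at midnight...", pad)
c2 = xor_bytes(b"bring all the gold coins in a black bag.", pad)
crib_drag(xor_bytes(c1, c2), b" the ")
```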
In addition, you can often deduce things directly from the bit pattern. For example, if the messages are in ASCII, then one thing to note that bit 6 of letters is set, but bit 6 of spaces, numbers and (most) punctuation is clear; because spaces are far more common than numbers and punctuation, then that will give you a good guess of where spaces occur in the texts (albeit without telling you which message a specific space appears in).
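A rough sketch of automating that observation (names are mine; 0x40 is the "bit 6" referred to above, and the 70% threshold is an arbitrary illustrative cutoff):
```def find_spaces(ciphertexts):
    # Where one ciphertext byte XORs against most of the others to values
    # with bit 6 (0x40) set, that byte is very likely a space XORed with
    # letters, so the corresponding message probably has a space there.
    n = min(len(c) for c in ciphertexts)
    guesses = []
    for i, c in enumerate(ciphertexts):
        others = [o for o in ciphertexts if o is not c]
        for pos in range(n):
            hits = sum(1 for o in others if (c[pos] ^ o[pos]) & 0x40)
            if hits >= 0.7 * len(others):
                guesses.append((i, pos))  # then key[pos] = c[pos] ^ ord(' ')
    return guesses
```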
Now, if you have 11 messages all encrypted with the same pad (an "11-time pad"), things get even easier. Obviously, you can grab a crib across one message, and check it against the other 10; if it makes all 10 make sense, then it is almost certainly accurate. Even better, by using the observation that you can distinguish spaces from letters (again, by comparing bit 6), you can find where all the spaces appear in the messages; these all act like 1 character cribs, probably revealing virtually all the text of all the messages. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 716, "passed_quality_check": "True", "flesch_reading_ease": 84.87444444444446, "entropy": 5.218665535331887} |
I have a small home automation lab (that I keep saying I'll expand, but haven't). In this setup, I have a control system to control lights (utilizing the x10 protocol), blinds, a Nest thermostat and two web cams.
With the recent record setting DDoS attacks utilizing unsecured IoT devices, I'd like to secure my small setup a bit.
What can a home user do to secure their network while still maintaining the "connect from anywhere" aspect that is a big part of the marketing? | null | [] | The absolute most common issue with IoT devices is default passwords. So change all the passwords. Pick a unique, random password for every device and write it down on paper (paper is safe from remote attackers and hard drive failures). 12 random (i.e. computer-generated) lowercase letters represent a good compromise between security and being hard to type. Each device should have a different password so that breaking one doesn't let the attacker break all of them. Enter the passwords in a password manager and use that password manager in the computers you use to control the devices.
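As a tiny illustrative Python sketch of generating such a password (the function name is made up):
```import secrets
import string

def device_password(length=12):
    # One unique, random lowercase password per device; write it on paper.
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

print(device_password())  # e.g. 'qhzvmkcwbajt'
```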
If the device has different authorization channels, for example an administration password and a day-to-day usage password, use different passwords for both and only record the administration password on selected devices.
The second generic security measure is to ensure that all your devices are behind a firewall or at least a NAT device. A typical home router is sufficient, but you should turn off UPnP, which can allow inadvertent back channels from the outside. The goal is to ensure that there's no direct way to connect from the Internet to the device. Connections should always go through a gateway that itself requires authentication to cross, and that you keep patched with any security updates.
You should also apply security updates to all the devices if they exist at all, which can be a problem. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 393, "passed_quality_check": "True", "flesch_reading_ease": 70.90087209302328, "entropy": 3.389792711087675} |
What are objective advantages or disadvantages of using the markup language LaTeX instead of a WYSIWYG word processor like MS Word or LibreOffice Writer?
Please use objective arguments. | null | [] | LaTeX is better at:
* Dealing with mathematical notation. Layout and entry are generally easier using LaTeX than some other sort of equation editor.
* Consistent handling of intra-document references and bibliography. As of a couple of years ago the major WYSIWYG editors still had problems with re-numbering cross-references and bibliography items. This is never a problem with BibTeX or LaTeX (see the short example after this list).
* Separation of content and style. In principle this means that you can write your document without caring how it is formatted, and at the end of the day wrap it in the style-file provided by the journal publisher before submission to conform to the house style. In practice some of the journal publishers demand special formatting commands that partially moot this process. Furthermore recent versions of Word and LibreOffice Writer, when properly used, should be able to keep track of various levels of section heading separate from the body text, and apply uniform styling to each level. The gap is somewhat closing.
* Tables and illustrations. With PSTricks or TikZ, one can produce high quality illustrations within the document (though the learning curve is a bit steep there). And I've found LaTeX to be better at preparing complex tables.
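For example, a bare-bones sketch of the reference handling and content/style separation mentioned above (the class, citation key and bibliography file are placeholders):
```\documentclass{article} % swap in a publisher's class to restyle the whole paper
\begin{document}
\section{Results}\label{sec:results}
Cross-references like Section~\ref{sec:results} and citations
like~\cite{knuth1984} are renumbered consistently on every compile.
\bibliographystyle{plain}
\bibliography{refs} % BibTeX database refs.bib
\end{document}
```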
WYSIWYG (especially Word and Writer) is better at:
* Collaborative editing. Without using an online site for collaborative LaTeX editing (such as ShareLaTeX), working collaboratively on a LaTeX file ideally requires some sort of separate revision control software. Word and Writer have very good comments/annotations and edit-tracking features. When a large number of authors are commenting on the writing of one file, this can be very useful.
* Spell check. Admittedly most text editors one uses to edit TeX files also do spell check. But this is generally conveniently built into WYSIWYG editors.
* Compatibility. Unless you work in mathematics, computer science, or physics (and sometimes even if you work in those fields), it is more likely that your collaborators will know what to do with a Word or Writer file than a LaTeX file.
* Minimum barrier to entry. If you just care about getting the ideas down on paper, you can use a WYSIWYG editor just like a typewriter. It may be tedious to fix the formatting later on, but compared to LaTeX, where one first needs to figure out how to set up a bare-minimum file before writing anything, this can be an attractive point.
A wash:
* Most journals provide templates for both Word and LaTeX, so there's no real winner there.
Suggestion: if you want to convince someone to start using LaTeX, start them out in one of the WYSIWYG environments designed for LaTeX (for example LyX). This will help somewhat to ameliorate the scary entry barrier. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 159, "passed_quality_check": "True", "flesch_reading_ease": 40.2896551724138, "entropy": 0.5974236452573322} |
Could Gandalf not have made his own One Ring? | null | [] | Possibly
In the foreword to The Lord of the Rings, Tolkien observes:
Saruman, failing to get possession of the Ring, would in the confusion and treacheries of the time have found in Mordor the missing links in his own researches into Ring-lore, and before long he would have made a Great Ring of his own with which to challenge the self-styled Ruler of Middle-earth.
Tolkien therefore definitely conceives of Saruman as being capable of making a Ring, and Gandalf would presumably have been likewise capable.
However, what we can also take from that statement is that making a Ring is not a simple task; there is some "lore" involved, and we know from elsewhere that Saruman had made especial study of the Rings of Power, but yet Tolkien states that even he would have required knowledge found only in Mordor to complete his own work.
So, to summarize:
* Yes, but,
* It's neither quick nor easy, because,
* There is considerable "lore" involved, and,
* It requires hundreds or even thousands of years of dedicated study to get to a "Saruman-level" of "lore", and,
* Even then there will be missing knowledge which may only be found in Mordor.
* So for all practical purposes it's actually "No".
| {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 37, "passed_quality_check": "True", "flesch_reading_ease": 103.70000000000002, "entropy": 0.10745644052596023} |
It’s the year 2018, and you live in the good ol’ North American landmass. The fascist landmass. By this year, the dystopian N.A.F party controls all of the landmass and secret police prowl the streets armed with automatic rifles. Protest the rules and NAF makes you disappear -- permanently.
Onto the subject
As you’ve seen in a lot of movies and whatnot, dystopian governments like to make people fit into a mandatory dress code. 1984 did it, a lot of other dystopian media did it, and so on. I plan to do the same, but I want to make my dystopian government a logical one, that only does what’s necessary to keep power. What is a logical reason why mandatory dress codes would be forced upon citizens? | null | [] | **Control the Supply**
The citizens of your dystopia work in a government-controlled factory. They shop in a government-controlled shop with government-controlled prices. What does the shop have in stock? The products of more government-controlled factories, of course.
It is not illegal to wear non-standard clothes. It is illegal to buy or sell them (that would be black marketeering) and it is usually illegal to make them (that would be diversion of government-supplied raw materials). One might "repair" government-pattern clothes into non-standard shapes, e.g. cutting trousers short or making a skirt out of an old bedsheet.
Few people make their own clothing. A sure sign they have too much time on their hands, why don't they join in a government-sponsored community activity instead?
Of course there is "special store No. 0815" which is open to senior regime officials and their families. These officials work so long hours in their selfless service of the people that they cannot possibly stand in the waiting line of a normal store. The special store opens by appointment, e.g. right between the parade for the Dear Leader's birthday and the official reception. And it stocks suitable clothes for that reception. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 579, "passed_quality_check": "True", "flesch_reading_ease": 77.84490935114506, "entropy": 5.015986691618641} |
When would one use an impact driver versus a regular drill? | null | [] | Drills and impact drivers are similar in that they're both rotating power tools and they have some overlap in usage, but they work differently and have different strengths.
Drills apply a constant torque and tend to have good control across a range of speeds and clutch settings. They're good for work that requires care or precision, like drilling a hole or driving a smaller screw.
Impact drivers provide much higher torque but in rapid bursts–very much like a hammer provides bursts of linear force.
This makes them good for heavy-duty jobs like driving long lag bolts (you can even take the lug nuts off your car wheels with most impact drivers), but the banging action makes them a little bit difficult to control. They're also not good for driving into something very soft–between the hammer strikes there's no torque applied and the bit can actually rotate backwards. You can see this by holding the chuck with your hands while the driver is on; you'll feel the pulsing of the strikes but the chuck won't actually rotate. Finally, some impact drivers don't have a clutch, which means you need to be very careful if you're driving something that could be over-tightened or broken by the high torque.
Besides the higher torque, impact drivers have another huge advantage in that you don't have to hold them tightly to provide the torque. The momentum of the rotating striker inside is what provides the rotation force, so even when you're driving something very tough it's quite easy to hold the tool. Drills on the other hand provide a constant torque and require the user to provide the reaction and prevent the tool from spinning.
If you have to pick a single all-purpose tool for general DIY use, a drill is much easier to control and is useful in a wider range of situations. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 49, "passed_quality_check": "True", "flesch_reading_ease": 87.99727272727276, "entropy": 0.16732290765131647} |
Alternative to Windows Snipping Tool for Mac OSX | null | [] | Mac OS has a built-in snipping tool similar to the one in Windows:
* ⌘+⇧+3 captures the whole screen
* ⌘+⇧+4 captures a custom rectangle
(click and drag over the screen region you want to capture)
* ⌘+⇧+4 then space captures a specific window (move the mouse cursor over the desired window, then click)
Press esc to cancel.
Screenshots are saved onto your Desktop and timestamped.
Holding control in addition to any of the sequences above causes the image to be copied instead of saved to the desktop.
By default the image format is png. However, you can change the format by typing in the Terminal:
```defaults write com.apple.screencapture type image_format
killall SystemUIServer
```
Where ```image_format``` is one of ```jpg```, ```tiff```, ```pdf```, ```png```, ```bmp``` or ```pict``` (among others). If you omit the second line, you will need to log out and in again for the change to take effect.
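For example, to have screenshots saved as JPEG from now on:
```defaults write com.apple.screencapture type jpg
killall SystemUIServer
```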
The settings for the shortcuts in the system preferences are located at:
System Preferences → Keyboard → Keyboard Shortcuts → Screen Shots
The Mac OS X application that enables screenshots is called Grab. It is located at ```/Applications/Utilities/Grab.app``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 41, "passed_quality_check": "False", "flesch_reading_ease": 61.24000000000001, "entropy": 0.11938848631606627} |
What is the difference between kerning vs. letter spacing? | null | [] | Any font has built-in spacing determined by the “side bearing” of each character. In metal type, the side bearing is the physical right or left edge of the individual piece of type that determines its spacing from the characters on either side. Digital fonts mimic this in the basic design process.
* “To kern” means to adjust the spacing between a pair of letters, numerals, punctuation, etc. (properly “glyphs”) so that their spacing looks correct. The letters AV, for example, have to overlap in any usual typeface, otherwise at least one of them ends up looking lost and bewildered. Their kerning is tightened to snug them up to one another. An italic f will collide with a following question or quote mark in some fonts, so the kerning must be opened up from the default spacing.
An ancient scribe could kern effortlessly with pen and ink, and if you read old manuscripts you'll see lots of places where a letter intrudes into the space of the one next to it. With metal type, it wasn't so easy. Kerning required cutting away some of the metal base, or table, of the character. The amount of time, skill and work involved is left as an exercise for readers whose minds are in need of a little boggling. The shapes of the letters in metal type intended for long passages of text such as books, or especially newspapers, were designed to minimize the amount of kerning that would be necessary.
Kerning tables built into a digital font assign preset kerning values to specific letter pairs, according to the designer's best estimate of how they should look. There can be a dozen or so pairs (or none!) in a cheapo font, thousands in a high-end one. No matter how thoroughly and well a kerning table is built, though, there will still be situations where some awkward combination requires that the kerning be loosened or tightened from the preset values of the glyphs.
* Letter spacing (often “tracking” in software applications) adjusts the spacing between all the glyphs in a piece of text. This can help to make a page look a little more open and inviting, for example, especially with a similarly open leading. Very large type, such as a big headline, almost always benefits from tightening the tracking. Tiny type, such as in captions or footnotes, is made more readable by opening the letter spacing a bit, especially if the paper is absorbent and will allow the ink to spread a little.
A final note: Use caution with both kerning and tracking. Tracking is very often wildly overdone; kerning is seldom done enough. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 50, "passed_quality_check": "True", "flesch_reading_ease": 66.10000000000002, "entropy": 0.1971034730390699} |
I read somewhere that C♯ and D♭ actually differ by 41 cents from each other. As far as I know, there should be 2 semitones between C and D. Moreover, C♯ is one semitone above C and D♭ is one semitone below D. Therefore, C♯ and D♭ should be equivalent. If so, how can C♯ and D♭ actually differ by 41 cents from each other? | null | [] | When we talk about the exact frequencies of each pitch class, we have to know the temperament, and a reference pitch. For example, 12-tone equal temperament (12TET) with A4=440Hz is a standard in modern music. From those two parameters, we can extrapolate the exact frequency of every possible note.
12TET is nearly ubiquitous nowadays (at least in Western music), but it doesn't sound as clean as Just Intonation (JI). In essence, 12TET has made every key sound equally imperfect. JI creates a scale where the intervals in the primary chords are all very nice simple ratios, and so the chords ring out very cleanly, but it only works in that key. Important note: within a given JI tuning, each of the 12 pitch classes still has only one single frequency. There is no difference between C♯ and D♭ in, say, "Pythagorean tuning based on A, with A=440Hz".
But most music doesn't stay in one key. While a piano can't make pitch adjustments on the fly (which is why we've agreed to use 12TET for it), most instruments in an orchestra can. So when the piece is in A major, the orchestra will use JI and adjust C♯ to be a little flatter than it would be if using 12TET. But then if the piece modulates to F♯ minor, they'll start playing it slightly sharp.
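As a quick check on where the question's "41 cents" can come from: in one common 5-limit just intonation assignment (an assumption for illustration), C♯ is a just chromatic semitone (25/24) above C while D♭ is a just diatonic semitone (16/15) above C, and the gap between them is the diesis of 128/125:
```from math import log2

def cents(ratio):
    return 1200 * log2(ratio)  # 1200 cents per octave

c_sharp = 25 / 24  # just chromatic semitone above C
d_flat = 16 / 15   # just diatonic semitone above C
print(round(cents(d_flat) - cents(c_sharp), 2))  # 41.06: the diesis 128/125
```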
When people say that C♯ is not the same as D♭, what they really mean (whether they realize it or not) is that context may make for different micro-adjustments. In C major, a C♯ might be the third of an A major chord, perhaps a secondary dominant of the ii chord, while D♭ might be the root of the Neapolitan chord. These would result in different tuning choices. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 257, "passed_quality_check": "False", "flesch_reading_ease": 82.95086956521742, "entropy": 1.7259581563982016} |
Not sure if this is a Mozilla-specific JS syntax, but I often found variables being declared this way, for example, in add-on SDK docs:
```var { Hotkey } = require("sdk/hotkeys");
```
and in various chrome Javascript (```let``` statement is being used in place of ```var```),
```let { classes: Cc, interfaces: Ci, results: Cr, utils: Cu } = Components;
```
I found it very confusing, but I was not able to find any documentation about either syntax, even on MDN. | null | [] | They're both JavaScript 1.7 features. The first one is block-level variables:
```let``` allows you to declare variables, limiting its scope to the block, statement, or expression on which it is used. This is unlike the ```var``` keyword, which defines a variable globally, or locally to an entire function regardless of block scope.
The second one is called destructuring:
Destructuring assignment makes it possible to extract data from arrays or objects using a syntax that mirrors the construction of array and object literals.
...
One particularly useful thing you can do with destructuring assignment is to read an entire structure in a single statement, although there are a number of interesting things you can do with them, as shown in the section full of examples that follows.
For those familiar with Python, it's similar to this syntax:
```>>> a, (b, c) = (1, (2, 3))
>>> a, b, c
(1, 2, 3)
```
The first code chunk is shorthand for:
```var {Hotkey: Hotkey} = require("sdk/hotkeys");
// Or
var Hotkey = require("sdk/hotkeys").Hotkey;
```
You can rewrite the second code chunk as:
```let Cc = Components.classes;
let Ci = Components.interfaces;
let Cr = Components.results;
let Cu = Components.utils;
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 391, "passed_quality_check": "False", "flesch_reading_ease": 45.491666666666674, "entropy": 2.5826703963983153} |
When reading some documentation about the security of a product, I found that the vendor uses the SHA-2 of a password to encrypt data (AES-256), instead of using this password directly.
Are there any advantages of doing so?
An attacker is not going to crack the encrypted data using this SHA-2-as-a-password key but rather exhaust the password keyspace (if feasible) and try its hash. Therefore the only reason I can think of is that there is an extra computational step (the creation of the hash). I would have rather increased the password entropy if the point is to computationally complexify the attack. | null | [] | It sounds like a primitive version of a key derivation function (KDF), in particular they probably could have avoided reinventing the wheel by using PBKDF2.
There are several reasons why you don't want to use the password directly as an AES key.
* To distribute the bits. The main property here is that a hash function's output is, statistically speaking, uniformly distributed. People tend to pick passwords that aren't fully random, in particular, most passwords would only contain characters you can type in a keyboard. When used as an encryption key, a non-statistically random key may expose weaknesses in the encryption function.
* To fit the key to the encryption key length. Most passwords are going to be either longer or shorter than the key size of the encryption function. By hashing your password, the derived key will be exactly the size the encryption function expects. While the entropy of the derived key doesn't increase, this avoids exposing weaknesses in the encryption function that could arise if you simply zero-padded the password or, worse, truncated it.
* To slow down brute-force attacks by making key derivation expensive. Per your description, the software is only using a single SHA-256 round, which is not much. But with a proper password-based KDF, like PBKDF2, there are usually tens of thousands or hundreds of thousands of rounds of the underlying hash function. This slows down computing the keys, increasing the effective strength of passwords without increasing their length (see the sketch after this list).
* To keep the user's plain text password out of memory, thus preventing it from being accidentally dumped to disk during hibernation or crash dump. While this wouldn't protect the hash from being used to decrypt the data you're encrypting, it will prevent the password from being reused to decrypt other files (which presumably uses different salt) or being tried on your online accounts or other devices that you use.
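A minimal Python sketch of such a KDF in practice (the iteration count and salt handling are illustrative, not a recommendation):
```import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # random per-encryption salt, stored with the ciphertext
key = hashlib.pbkdf2_hmac("sha256", password, salt,
                          iterations=200_000,  # many rounds, unlike one bare SHA-256
                          dklen=32)            # 32 bytes = AES-256 key size
```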
| {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 508, "passed_quality_check": "True", "flesch_reading_ease": 62.562598130841124, "entropy": 4.0637659063274985} |
My online friend is asking for money in order to visit my home country. Is this a legit request or a scam? | null | [] | The first rule for every such question is simple:
If you have to ask, it's probably a scam
While we cannot possibly examine every possible situation from every angle, online scams are simply far too common to trust someone you have known only on the Internet or even what appears to be a real friend or relative, who may have had their email accounts hacked or might be simply being impersonated. So the basic advice is: don't send out your private details, don't pay for anything and don't send any money for any purposes.
Some common examples are:
* love interest who wants to meet you.
* 'supposed' friend in need.
* prize/lottery.
* inheritance of some distant relative.
* person in distress asking you to take care of their financial assets.
* some government/private fine/tax/loan/offer scam
If you know the person who seems to have send the email asking for money in person, contact them in an other way or have them answer a personal question only they know the answer to, before you accept their email as real.
In many countries there are organisations to whom you can report internet fraud or suspected internet fraud. They also offer advice about common scams and how to recognise them.
In the USA: The IC3 (part of the FBI) (further information from the FBI on romance scams)
In Canada: The Canadian Anti-Fraud Centre
In the UK: Action Fraud
The second rule for every such question is also simple:
google this requirement
A legitimate requirement, if it exists, would be found in the relevant official sources, including airline or embassy web sites. For example, Indian nationals are required to show USD 500/week to enter Thailand.
Many countries also require visa applicants to show means of financial resources. However, cash is rarely accepted as such evidence (since one can just borrow it for a day), and depositing this cash in the applicant's bank account a day before applying for a visa usually backfires. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 85, "passed_quality_check": "True", "flesch_reading_ease": 84.15181818181821, "entropy": 0.6337694343406497} |
When converting from RGB to grayscale, it is said that specific weights to channels R, G, and B ought to be applied. These weights are: 0.2989, 0.5870, 0.1140.
It is said that the reason for this is different human perception/sensibility towards these three colors. Sometimes it is also said these are the values used to compute NTSC signal.
However, I didn't find a good reference for this on the web. What is the source of these values? | null | [] | If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.
### RGB -> ^gamma -> Y -> L*
In color science, the common RGB values, as in html rgb( 10%, 20%, 30% ),
are called "nonlinear" or
Gamma corrected.
"Linear" values are defined as
```Rlin = R^gamma, Glin = G^gamma, Blin = B^gamma
```
where gamma is 2.2 for many PCs.
The usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma))
(purists tongue-click) but here I'll drop the '.
Brightness on a CRT display is proportional to RGBlin = RGB ^ gamma,
so 50% gray on a CRT is quite dark: .5 ^ 2.2 = 22% of maximum brightness.
(LCD displays are more complex;
furthermore, some graphics cards compensate for gamma.)
To get the measure of lightness called ```L*``` from RGB,
first divide R G B by 255, and compute
```Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
```
This is ```Y``` in XYZ color space; it is a measure of color "luminance".
(The real formulas are not exactly x^gamma, but close;
stick with x^gamma for a first pass.)
Finally,
```L* = 116 * Y ^ 1/3 - 16
```
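Putting the whole chain together in Python (plain gamma 2.2 as above; the official sRGB and CIE formulas add small linear segments near black that are omitted here):
```def lightness(r, g, b, gamma=2.2):
    rl, gl, bl = ((c / 255) ** gamma for c in (r, g, b))
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl  # luminance Y
    return 116 * y ** (1 / 3) - 16               # approximate L*

print(round(lightness(128, 128, 128)))  # html 50% gray -> L* of about 54
```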
"... aspires to perceptual uniformity [and] closely matches human perception of lightness." --
Wikipedia Lab color space | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 364, "passed_quality_check": "True", "flesch_reading_ease": 81.20666666666669, "entropy": 3.3393056673699983} |
What is the difference between echo, print, print_r, var_dump and var_export in PHP? | null | [] | Here is a general overview of these PHP functions:
###echo
* Outputs one or more strings, separated by commas
* Has no return value
* Example:
```echo "Cadena 1", "Cadena 2";
```
###print
* Outputs only a single string
* Returns 1, so it can be used in an expression
* Examples:
```print "Hello";
if($expresion && print "Cadena"){
// Do something
}
```
###print_r()
* Outputs a human-readable representation of any single value
* Accepts not only strings but also arrays and objects, formatting them to be more readable
* Can return its output as a return value if you pass ```true``` as its second argument
* Useful for debugging
Example: If we have the following array:
```$a = array ('a' => 'manzana', 'b' => 'banana', 'c' => array ('x', 'y', 'z'));
print_r ($a);
```
Our ```print_r($a);``` will output something like this:
```Array
(
[a] => manzana
[b] => banana
[c] => Array
(
[0] => x
[1] => y
[2] => z
)
)
```
###var_dump()
* Outputs a human-readable representation of one or more values, separated by commas
* Accepts not only strings but also arrays and objects, formatting them to be more readable
* Uses a different format than ```print_r()``` above; for example, it also shows the type of each value
* Useful for debugging
* Has no return value
Example: If we have the following array:
```$a = array(1, 2, array("a", "b", "c"));
var_dump($a);
```
Our ```var_dump($a);``` will output something like this (note that it shows the type of each element in our array):
```array(3) {
[0]=>
int(1)
[1]=>
int(2)
[2]=>
array(3) {
[0]=>
string(1) "a"
[1]=>
string(1) "b"
[2]=>
string(1) "c"
}
}
```
###var_export()
* Outputs a human-readable and executable representation of any value
* Accepts not only strings but also arrays and objects, formatting them to be more readable
* Uses an output format different from ```var_dump()``` and ```print_r()```; the output is valid PHP code
* Useful for debugging
* Can return its output as a return value if you pass ```true``` as its second argument
Example: if we run the following
```class A {
public $var;
}
$a = new A;
$a->var = 5;
var_export($a);
```
We will get as a result:
```A::__set_state(array(
'var' => 5,
))
```
###Notes
* ```echo``` is much faster than ```print```
* ```echo``` and ```print``` are language constructs
* ```print_r()``` and ```var_dump()``` are functions
* ```echo``` and ```print``` can take a single argument in parentheses like a function, e.g. ```echo("Hello World");``` or ```print("Hello World");``` (note that ```echo``` with parentheses cannot take multiple arguments)
| {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 68, "passed_quality_check": "False", "flesch_reading_ease": 76.50153846153849, "entropy": 0.3619944268243902} |
In Android, I defined an ```ImageView```'s ```layout_width``` to be ```fill_parent``` (which takes up the full width of the phone).
If the image I put to ```ImageView``` is bigger than the ```layout_width```, Android will scale it, right? But what about the height? When Android scales the image, will it keep the aspect ratio?
What I find out is that there is some white space at the top and bottom of the ```ImageView``` when Android scales an image which is bigger than the ```ImageView```. Is that true? If yes, how can I eliminate that white space? | null | [] | * Yes, by default Android will scale your image down to fit the ImageView, maintaining the aspect ratio. However, make sure you're setting the image to the ImageView using ```android:src="..."``` rather than ```android:background="..."```. ```src=``` makes it scale the image maintaining aspect ratio, but ```background=``` makes it scale and distort the image to make it fit exactly to the size of the ImageView. (You can use a background and a source at the same time though, which can be useful for things like displaying a frame around the main image, using just one ImageView.)
* You should also see ```android:adjustViewBounds``` to make the ImageView resize itself to fit the rescaled image. For example, if you have a rectangular image in what would normally be a square ImageView, adjustViewBounds=true will make it resize the ImageView to be rectangular as well. This then affects how other Views are laid out around the ImageView.
Then as Samuh wrote, you can change the way it scales images by default using the ```android:scaleType``` parameter.
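Putting those together, a minimal layout sketch (the drawable name is a placeholder, and this goes inside a layout whose root declares the ```android``` namespace):
```<ImageView
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:adjustViewBounds="true"
    android:scaleType="fitCenter"
    android:src="@drawable/photo" />
```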
By the way, the easiest way to discover how this works would simply have been to experiment a bit yourself! Just remember to look at the layouts in the emulator itself (or an actual phone) as the preview in Eclipse is usually wrong.
| {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 462, "passed_quality_check": "False", "flesch_reading_ease": 88.73804347826089, "entropy": 3.6552351858708674} |
I want to compare 2 arrays of objects in JavaScript code. The objects have 8 total properties, but each object will not have a value for each, and the arrays are never going to be any larger than 8 items each, so maybe the brute force method of traversing each and then looking at the values of the 8 properties is the easiest way to do what I want to do, but before implementing, I wanted to see if anyone had a more elegant solution. Any thoughts? | null | [] | As serialization doesn't work generally (only when the order of properties matches: ```JSON.stringify({a:1,b:2}) !== JSON.stringify({b:2,a:1})```) you have to check the count of properties and compare each property as well:
```const objectsEqual = (o1, o2) =>
Object.keys(o1).length === Object.keys(o2).length
&& Object.keys(o1).every(p => o1[p] === o2[p]);
const obj1 = { name: 'John', age: 33};
const obj2 = { age: 33, name: 'John' };
const obj3 = { name: 'John', age: 45 };
console.log(objectsEqual(obj1, obj2)); // true
console.log(objectsEqual(obj1, obj3)); // false```
If you need a deep comparison, you can call the function recursively:
```const obj1 = { name: 'John', age: 33, info: { married: true, hobbies: ['sport', 'art'] } };
const obj2 = { age: 33, name: 'John', info: { hobbies: ['sport', 'art'], married: true } };
const obj3 = { name: 'John', age: 33 };
const objectsEqual = (o1, o2) =>
typeof o1 === 'object' && Object.keys(o1).length > 0
? Object.keys(o1).length === Object.keys(o2).length
&& Object.keys(o1).every(p => objectsEqual(o1[p], o2[p]))
: o1 === o2;
console.log(objectsEqual(obj1, obj2)); // true
console.log(objectsEqual(obj1, obj3)); // false```
Then it's easy to use this function to compare objects in arrays:
```const arr1 = [obj1, obj1];
const arr2 = [obj1, obj2];
const arr3 = [obj1, obj3];
const arraysEqual = (a1, a2) =>
a1.length === a2.length && a1.every((o, idx) => objectsEqual(o, a2[idx]));
console.log(arraysEqual(arr1, arr2)); // true
console.log(arraysEqual(arr1, arr3)); // false
``` | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 363, "passed_quality_check": "True", "flesch_reading_ease": 65.54348484848487, "entropy": 3.15940274069549} |
I'm working with another developer on a project, and we're using Github as our remote repo. I'm on a Mac using git 1.7.7.3, he's on Windows using git 1.7.6.
This is what's happening
* One of us (let's call him developer A, but it doesn't matter which one) pushes a set of commits to GitHub.
* The other (developer B) makes some local commits.
* B does a ```git pull```.
* B does a ```git push```.
* Looking at the commit history log, I see Merge branch 'master' of github.com:foo/bar
The commit log gets littered with "Merge branch" messages over time, and also shows developer B as committing changes that developer A made. The only way we've found to prevent this issue has been to do a ```git pull --rebase``` at step 3, but I don't know what side effects rebasing will introduce. This is my first time working on a multi-developer git repo, so is this just normal behavior? Any thoughts on how to solve this issue? | null | [] | The commit you are seeing is perfectly fine. A ```pull``` effectively runs ```git fetch``` and then ```git merge``` so a merge is usually happening when you run ```git pull```.
The alternative of using rebasing instead of merging is possible, but usually you should avoid it. Rebasing allows you to keep a linear history, but it also removes any information about the branching that originally happened. It will also cause the history of the current branch to be rewritten, recreating all commits that are not contained in the target branch (in your case, the remote). As the recreated commits are different commits, this can cause a lot of confusion when developing together with others, especially when people have already checked out parts of those commits before they get rewritten (for example with feature branches). So as a rule of thumb, you should never rewrite any commit that was already pushed.
The commits you see are there to combine two (or more) branches. It is perfectly fine to have a commit that does nothing other than merge multiple branches. In fact, a merge commit makes it very clear in the history where branches were combined. In comparison to rebasing, merging also allows you to effectively see the original history as it was developed, including the actual branches that coexisted.
So, long story short: Yes, having merge commits is perfectly fine and you should not worry about them. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 760, "passed_quality_check": "True", "flesch_reading_ease": 83.93298647469462, "entropy": 5.561766460052123} |
You are asleep on your boat on open sea. When you wake up you discover you have been robbed. A quick inspection of security cam footage reveals that the pirates who robbed you left your ship exactly an hour ago. The sea is flat, extends indefinitely, and is fully covered in a thick persistent fog. You have no idea in which direction the pirates fled. But you do know that these pirates always continue in a straight line at full speed away from their victim. Their maximum speed on open water is 20 nautical miles per hour. Your boat can reach 21 nautical miles per hour.
How do you catch the pirates? | null | [] | If we assume the ocean is flat and extends indefinitely in all directions, there is a strategy that guarantees we can catch the pirates in at most 800,000 years.
Put our current location as the origin of a coordinate system. We will describe our position in polar coordinates, as a function of time: $(r(t),\theta(t))$ (where we have arbitrarily chosen a direction to be $\theta=0$, and $t=0$ is when we realized we had been robbed).
We begin by traveling in the $\theta=0$ direction for 20 hours, putting our position at $(420, 0)$. We are then the same distance from the origin as the pirates. Next, we will travel in a spiral, in a manner so that $r'(t)=20$ at all times. This guarantees we will always be the same distance from the origin as the pirates. For $t\geq 20$, we will have $r(t)=420+20(t-20)=20t+20$.
Our speed is
$$
\sqrt{(r')^2+r^2(\theta')^2} = 21\text{ mph},
$$
and $r'(t)=20$ for $t>20$, so
$$
\theta'(t)=\sqrt{\frac{41}{r^2}}=\frac{\sqrt{41}}{20+20t}.
$$
If there is a $t\geq 20$ for which $\theta(t)$ is the angle in which the pirates fled, we will catch them. This means we will certainly catch the pirates by the time $\theta$ has increased from $0$ to $2\pi$. If $t_0$ is the time this happens, we have
$$
2\pi=\int_{20}^{t_0}\theta'(t)\,dt=\int_{20}^{t_0}\frac{\sqrt{41}}{20+20t}dt.
$$
Solving for $t_0$ gives
$$
t_0=21\mathrm{exp}\left(\frac{40\pi}{\sqrt{41}}\right)-1\approx 7,005,043,026.
$$
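A quick numerical check of this value (plain evaluation of the closed form above):
```from math import pi, sqrt, exp

t0 = 21 * exp(40 * pi / sqrt(41)) - 1  # hours of pursuit, from the formula
print(f"{t0:,.0f} hours, about {t0 / (24 * 365.25):,.0f} years")
```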
This means we can catch the pirates in at most 7,005,043,026 hours, or about 800 millennia. Better later than never! | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 493, "passed_quality_check": "True", "flesch_reading_ease": 85.53358898721733, "entropy": 3.236063297840618} |
What is the difference between the question words "wieso", "weshalb" and "warum"? | null | [] | In present-day German there is no semantic difference between the three interrogative adverbs (more precisely: causal interrogative adverbs) warum, weshalb and wieso. The three words are therefore synonyms and thus freely interchangeable.
If one is being very subtle about it, one could find the following distinction in the origins of the three words:
* warum as a question about the reason or the motive
Warum freut sich die Frau? (Why is the woman pleased?)
Weil sie im Lotto gewonnen hat. (Because she won the lottery.)
* wieso as a question about the cause
Wieso fällt ein Apfel auf den Boden? (Why does an apple fall to the ground?)
Weil er der Schwerkraft ausgesetzt ist. (Because it is subject to gravity.)
* weshalb as a question about the purpose
Weshalb besucht er die Schule? (Why does he attend school?)
Weil er etwas lernen möchte. (Because he wants to learn something.)
Etymology
* warum
attested in Old High German as wār umbe, via Middle High German warumbe, from wār (where) and umbe (around, about). Equivalent to worum; source: Grimmsches Wörterbuch
* wieso
a rendering of the Latin qui sic? → wie so? ("how so?"); older Italian per che?, modern Italian perché? = per che ragione? ("for what reason"); source: Grimmsches Wörterbuch
* weshalb
aus welchem Grunde ("for what reason"): a fusion of the preposition halb(en) (meaning "because of") with the preceding genitive of the pronoun was; source: Grimmsches Wörterbuch
However, the boundaries between these notions are often blurred, and in everyday language the distinction is hardly noticed at all. | {
"type": null,
"score": null,
"agent": null
} | null | {"n_characters": 71, "passed_quality_check": "False", "flesch_reading_ease": 72.61545454545457, "entropy": 0.34013973088496957} |