While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
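Stated as a worked relation (with E_abs the absorbed energy and E_b the binding energy, symbols chosen here purely for illustration), the excess energy appears as the freed electron's kinetic energy:

$$ E_k = E_{\mathrm{abs}} - E_b , \qquad E_{\mathrm{abs}} > E_b $$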
If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.
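The frequency ν of the emitted photon follows directly from the energy difference between the two states (a standard relation, with h Planck's constant):

$$ h\nu = E_{\mathrm{upper}} - E_{\mathrm{lower}} $$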
If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon.
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
One of the earliest steps towards atomic physics was the recognition that matter is composed of atoms. The idea appears in texts written between the 6th and 2nd centuries BC, such as those of Democritus and the Vaiśeṣika Sūtra of Kaṇāda. The theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it wasn't clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.
Since the Second World War, both the theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and the associated collision processes to be developed. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or "decays" into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2 e and a mass of 4 Da. For example, uranium-238 decays to form thorium-234.
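Written as a nuclear equation, the uranium-238 example shows the bookkeeping of mass number (reduced by 4) and atomic number (reduced by 2):

$$ {}^{238}_{92}\mathrm{U} \;\longrightarrow\; {}^{234}_{90}\mathrm{Th} + {}^{4}_{2}\mathrm{He} $$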
While alpha particles have a charge +2 e, this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms.
Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, Sb. Exceptionally, however, beryllium-8 decays to two alpha particles.
Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force.
Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly little variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, their electric charge of +2 e and their relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air.
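The quoted figures can be cross-checked with a small, non-relativistic Python sketch; the constants and the 5 MeV input are standard values assumed here for illustration, not data taken from this text's sources.

    # Rough consistency check of the typical alpha-particle figures quoted above.
    import math

    E_MeV = 5.0                      # typical alpha kinetic energy
    MeV_to_J = 1.602176634e-13       # joules per MeV
    m_alpha = 6.6446573e-27          # alpha particle mass in kg (about 4.0015 Da)
    c = 2.99792458e8                 # speed of light, m/s

    E_J = E_MeV * MeV_to_J
    speed = math.sqrt(2 * E_J / m_alpha)      # ~1.5e7 m/s, about 5% of c
    rest_MeV = m_alpha * c**2 / MeV_to_J      # ~3727 MeV rest energy
    fraction = E_MeV / (E_MeV + rest_MeV)     # ~0.13% of total energy
    per_mass = E_J / m_alpha                  # ~1.2e14 J/kg, i.e. on the order of 100 TJ/kg

    print(f"speed ≈ {speed:.2e} m/s ({speed / c:.1%} of c)")
    print(f"share of total energy ≈ {fraction:.2%}")
    print(f"energy per unit mass ≈ {per_mass / 1e12:.0f} TJ/kg")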
Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production.
Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He ions. By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law.
The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of the nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size.
One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation E_di = (m_i − m_f − m_p)c², where m_i is the initial mass of the nucleus, m_f is the mass of the nucleus after particle emission, and m_p is the mass of the emitted (alpha) particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would require 6.1 MeV. Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry.
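A sketch of that calculation for the uranium-232 example. The atomic masses below are approximate tabulated values quoted from memory; treat them as illustrative inputs rather than authoritative data.

    # Disintegration (Q) energy E_di = (m_i - m_f - m_p) * c^2, with atomic masses in u.
    U_TO_MEV = 931.494  # energy equivalent of one unified mass unit, in MeV

    # Approximate atomic masses (u); check a current mass evaluation before relying on them.
    m_U232, m_Th228, m_He4 = 232.037156, 228.028741, 4.002602
    m_Pa231, m_H1 = 231.035884, 1.007825   # proton-emission channel (hydrogen-atom mass)

    def q_value(m_initial, m_final, m_emitted):
        """Disintegration energy in MeV; a positive value means the decay releases energy."""
        return (m_initial - m_final - m_emitted) * U_TO_MEV

    print(f"U-232 alpha emission:  {q_value(m_U232, m_Th228, m_He4):+.1f} MeV")   # about +5.4
    print(f"U-232 proton emission: {q_value(m_U232, m_Pa231, m_H1):+.1f} MeV")    # about -6.1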
These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the interplay between the strong nuclear and the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle within the nucleus can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to overcome the barrier and escape.
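An order-of-magnitude estimate of that barrier height can be made by treating it as the Coulomb energy of an alpha particle in contact with a heavy daughter nucleus; the radius formula, the r0 value and the example charge numbers below are textbook-style assumptions, not figures from this text.

    # Coulomb barrier V = Z1*Z2*e^2/(4*pi*eps0*r) at the nuclear surface.
    COULOMB_CONST = 1.44   # e^2/(4*pi*eps0) in MeV*fm
    r0 = 1.2               # nuclear radius parameter in fm (typical textbook value)

    Z_alpha, A_alpha = 2, 4
    Z_daughter, A_daughter = 90, 234        # e.g. thorium-234 from uranium-238 decay

    r = r0 * (A_daughter ** (1 / 3) + A_alpha ** (1 / 3))   # "touching spheres" separation, fm
    barrier = Z_alpha * Z_daughter * COULOMB_CONST / r       # MeV

    print(f"barrier ≈ {barrier:.0f} MeV at r ≈ {r:.1f} fm")  # roughly 25-30 MeV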
Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling. The quantum tunneling theory of alpha decay, independently developed by George Gamow and by Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory. Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it:
It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed.
The theory supposes that the alpha particle can be considered an independent particle within a nucleus, one that is in constant motion but held within the nucleus by the strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×10⁷ m/s within a nuclear diameter of approximately 10⁻¹⁴ m will collide with the barrier more than 10²¹ times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is 2.01 × 10¹⁹ years.
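To see how a tiny per-collision escape probability stretches into a long half-life, here is a minimal sketch built on the figures quoted above; the escape probabilities in the loop are arbitrary illustration values, not fitted to any real nuclide.

    # Assault-frequency picture: decay constant = (collisions per second) * (escape probability).
    import math

    speed = 1.5e7        # alpha speed inside the nucleus, m/s (figure quoted above)
    diameter = 1e-14     # nuclear diameter, m (order of magnitude quoted above)
    frequency = speed / diameter          # ~1.5e21 barrier collisions per second

    SECONDS_PER_YEAR = 3.156e7

    for p_escape in (1e-30, 1e-35, 1e-40):    # illustrative values only
        half_life = math.log(2) / (frequency * p_escape)
        print(f"p_escape = {p_escape:.0e}  ->  half-life ≈ {half_life / SECONDS_PER_YEAR:.1e} years")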
Among the beta-decay stable isobars that are also stable with regard to double beta decay, the isotopes with mass number A = 5, A = 8, 143 ≤ A ≤ 155, 160 ≤ A ≤ 162, and A ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide. Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with A ≤ 209, which are very long. (Such nuclides with A ≤ 209 are primordial nuclides except Sm.)
Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law.
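One common way of writing the resulting relation (the constants a_1 and a_2 are empirical, and both the exact form and the choice of Z vary between sources; this statement is given for orientation, not quoted from this text):

$$ \log_{10} \lambda = -a_1 \frac{Z}{\sqrt{E}} + a_2 $$

where λ is the decay constant, Z the atomic number and E the alpha particle's kinetic energy.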
Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm.
Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones).
Alpha decay can provide a safe power source for the radioisotope thermoelectric generators used for space probes and formerly used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay.
Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the "static cling" to dissipate more rapidly.
Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material and have a very short mean free path. This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radioactive daughters, and both are often accompanied by gamma photon emission.
Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons.
However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the mass of the alpha (4 Da) divided by the mass of the parent (typically about 200 Da) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collects on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations.
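As a worked example of that estimate (momentum conservation gives the recoil energy as the mass ratio times the alpha energy; the 5 MeV and 230 Da figures are generic values chosen for illustration):

$$ E_{\mathrm{recoil}} \approx \frac{m_\alpha}{m_{\mathrm{daughter}}}\, E_\alpha \approx \frac{4}{230} \times 5\ \mathrm{MeV} \approx 87\ \mathrm{keV} $$

which is on the keV scale mentioned above and well under 2% of the total decay energy.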
The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden.
The Russian defector Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. The question remains unanswered, and the explanations offered so far are not truly satisfactory.
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle (pair production), which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z bosons, π mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curvature of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction, due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios.
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
... the development of quantum field theory made the interpretation of antiparticles as holes unnecessary, even though it lingers on in many textbooks.
(Steven Weinberg)
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons. However, these "negative-energy electrons" turned out to be positrons, and not protons.
This picture implied an infinite negative charge for the universe – a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e + p → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e⁻ + e⁺ → γ + γ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e⁻ + e⁺ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one-particle quantum state may fluctuate into a two-particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing, a complicated example of mass renormalization.
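The kinematic argument can be made explicit in the centre-of-momentum frame of the pair (a standard textbook sketch): there the total momentum vanishes while the total energy is at least twice the electron rest energy, but a single photon always carries momentum equal to its energy divided by c, so no one-photon final state can satisfy both conditions.

$$ \mathbf{p}_{e^-} + \mathbf{p}_{e^+} = 0, \qquad E_{\mathrm{tot}} \ge 2 m_e c^2, \qquad \text{but} \qquad |\mathbf{p}_\gamma| = \frac{E_\gamma}{c} > 0 $$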
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation C, parity P and time reversal T. C and P are linear, unitary operators, T is antilinear and antiunitary, ⟨Ψ | TΦ⟩ = ⟨Φ | T⁻¹Ψ⟩. If |p, σ, n⟩ denotes the quantum state of a particle n with momentum p and spin J whose component in the z-direction is σ, then one has
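(the displayed relation below is a reconstruction in a common phase convention and should be treated as an assumption; the overall phase on the right-hand side is convention-dependent)

$$ CPT\, |p, \sigma, n\rangle \;\propto\; |p, -\sigma, n^c\rangle $$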
where n^c denotes the charge conjugate state, that is, the antiparticle. In particular, a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group, which means the antiparticle has the same mass and the same spin.
If C, P and T can be defined separately on the particles and antiparticles, then
where the proportionality sign indicates that there might be a phase on the right hand side.
As CPT anticommutes with the charges, (CPT)Q = −Q(CPT), particle and antiparticle have opposite electric charges q and −q.
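Spelled out, with Q the charge operator and |ψ⟩ a state of charge q, this is a one-line consequence of the anticommutation relation just stated:

$$ Q\,(CPT\,|\psi\rangle) = -\,CPT\,Q\,|\psi\rangle = -q\,(CPT\,|\psi\rangle), $$

so the CPT-transformed state carries charge −q.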
One may try to quantize an electron field without mixing the annihilation and creation operators by writing
where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and a_k denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian
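(the displayed form below is a minimal free-field sketch in the notation just introduced, supplied here as an assumption rather than quoted from the source)

$$ H = \sum_k E(k)\, a_k^\dagger a_k $$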
then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations
where k has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form
where the first sum is over positive energy states and the second over those of negative energy. The energy becomes
where E_0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., a_k |0⟩ = 0 and b_k |0⟩ = 0. Then the energy of the vacuum is exactly E_0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of a_k and b_k shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stückelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Since this picture was first developed by Stückelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stückelberg interpretation of antiparticles to honor both scientists.
Ḥasan Ibn al-Haytham (Latinized as Alhazen; /ælˈhæzən/; full name Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham, أبو علي، الحسن بن الحسن بن الهيثم; c. 965 – c. 1040) was a medieval mathematician, astronomer, and physicist of the Islamic Golden Age from present-day Iraq. Referred to as "the father of modern optics", he made significant contributions to the principles of optics and visual perception in particular. His most influential work is titled Kitāb al-Manāẓir (Arabic: كتاب المناظر, "Book of Optics"), written during 1011–1021, which survived in a Latin edition. The works of Alhazen were frequently cited during the scientific revolution by Isaac Newton, Johannes Kepler, Christiaan Huygens, and Galileo Galilei.
Ibn al-Haytham was the first to correctly explain the theory of vision, and to argue that vision occurs in the brain, pointing to observations that it is subjective and affected by personal experience. He also stated the principle of least time for refraction, which would later become Fermat's principle. He made major contributions to catoptrics and dioptrics by studying reflection, refraction and the nature of images formed by light rays. Ibn al-Haytham was an early proponent of the concept that a hypothesis must be supported by experiments based on confirmable procedures or mathematical reasoning. An early pioneer in the scientific method five centuries before Renaissance scientists, he is sometimes described as the world's "first true scientist". He was also a polymath, writing on philosophy, theology and medicine.
Born in Basra, he spent most of his productive period in the Fatimid capital of Cairo and earned his living authoring various treatises and tutoring members of the nobilities. Ibn al-Haytham is sometimes given the byname al-Baṣrī after his birthplace, or al-Miṣrī ("the Egyptian"). Al-Haytham was dubbed the "Second Ptolemy" by Abu'l-Hasan Bayhaqi and "The Physicist" by John Peckham. Ibn al-Haytham paved the way for the modern science of physical optics.
Ibn al-Haytham (Alhazen) was born c. 965 to a family of Arab or Persian origin in Basra, Iraq, which was at the time part of the Buyid emirate. His initial influences were in the study of religion and service to the community. At the time, society held a number of conflicting views of religion, which he ultimately sought to step aside from. This led him to delve into the study of mathematics and science. He held a position with the title of vizier in his native Basra, and became famous for his knowledge of applied mathematics, as evidenced by his attempt to regulate the flooding of the Nile.
Upon his return to Cairo, he was given an administrative post. After he proved unable to fulfill this task as well, he incurred the ire of the caliph al-Hakim, and is said to have been forced into hiding until the caliph's death in 1021, after which his confiscated possessions were returned to him. Legend has it that Alhazen feigned madness and was kept under house arrest during this period. During this time, he wrote his influential Book of Optics. Alhazen continued to live in Cairo, in the neighborhood of the famous University of al-Azhar, and lived from the proceeds of his literary production until his death in c. 1040. (A copy of Apollonius' Conics, written in Ibn al-Haytham's own handwriting, exists in Aya Sofya: MS Aya Sofya 2762, 307 fol., dated Safar 415 A.H. [1024].)
Among his students were Sorkhab (Sohrab), a Persian from Semnan, and Abu al-Wafa Mubashir ibn Fatek, an Egyptian prince.
Alhazen's most famous work is his seven-volume treatise on optics Kitab al-Manazir (Book of Optics), written from 1011 to 1021. In it, Ibn al-Haytham was the first to explain that vision occurs when light reflects from an object and then passes to one's eyes, and to argue that vision occurs in the brain, pointing to observations that it is subjective and affected by personal experience.
Optics was translated into Latin by an unknown scholar at the end of the 12th century or the beginning of the 13th century.
This work enjoyed a great reputation during the Middle Ages. The Latin version of De aspectibus was translated at the end of the 14th century into Italian vernacular, under the title De li aspecti.
It was printed by Friedrich Risner in 1572, with the title Opticae thesaurus: Alhazeni Arabis libri septem, nunc primum editi; Eiusdem liber De Crepusculis et nubium ascensionibus (English: Treasury of Optics: seven books by the Arab Alhazen, first edition; by the same, on twilight and the height of clouds). Risner is also the author of the name variant "Alhazen"; before Risner he was known in the west as Alhacen. Works by Alhazen on geometric subjects were discovered in the Bibliothèque nationale in Paris in 1834 by E. A. Sedillot. In all, A. Mark Smith has accounted for 18 full or near-complete manuscripts, and five fragments, which are preserved in 14 locations, including one in the Bodleian Library at Oxford, and one in the library of Bruges.
Two major theories on vision prevailed in classical antiquity. The first theory, the emission theory, was supported by such thinkers as Euclid and Ptolemy, who believed that sight worked by the eye emitting rays of light. The second theory, the intromission theory supported by Aristotle and his followers, had physical forms entering the eye from an object. Previous Islamic writers (such as al-Kindi) had argued essentially on Euclidean, Galenist, or Aristotelian lines. The strongest influence on the Book of Optics was from Ptolemy's Optics, while the description of the anatomy and physiology of the eye was based on Galen's account. Alhazen's achievement was to come up with a theory that successfully combined parts of the mathematical ray arguments of Euclid, the medical tradition of Galen, and the intromission theories of Aristotle. Alhazen's intromission theory followed al-Kindi (and broke with Aristotle) in asserting that "from each point of every colored body, illuminated by any light, issue light and color along every straight line that can be drawn from that point". This left him with the problem of explaining how a coherent image was formed from many independent sources of radiation; in particular, every point of an object would send rays to every point on the eye.
What Alhazen needed was for each point on an object to correspond to one point only on the eye. He attempted to resolve this by asserting that the eye would only perceive perpendicular rays from the object—for any one point on the eye, only the ray that reached it directly, without being refracted by any other part of the eye, would be perceived. He argued, using a physical analogy, that perpendicular rays were stronger than oblique rays: in the same way that a ball thrown directly at a board might break the board, whereas a ball thrown obliquely at the board would glance off, perpendicular rays were stronger than refracted rays, and it was only perpendicular rays which were perceived by the eye. As there was only one perpendicular ray that would enter the eye at any one point, and all these rays would converge on the centre of the eye in a cone, this allowed him to resolve the problem of each point on an object sending many rays to the eye; if only the perpendicular ray mattered, then he had a one-to-one correspondence and the confusion could be resolved. He later asserted (in book seven of the Optics) that other rays would be refracted through the eye and perceived as if perpendicular. His arguments regarding perpendicular rays do not clearly explain why only perpendicular rays were perceived; why would the weaker oblique rays not be perceived more weakly? His later argument that refracted rays would be perceived as if perpendicular does not seem persuasive. However, despite its weaknesses, no other theory of the time was so comprehensive, and it was enormously influential, particularly in Western Europe. Directly or indirectly, his De Aspectibus (Book of Optics) inspired much activity in optics between the 13th and 17th centuries. Kepler's later theory of the retinal image (which resolved the problem of the correspondence of points on an object and points in the eye) built directly on the conceptual framework of Alhazen.
Alhazen showed through experiment that light travels in straight lines, and carried out various experiments with lenses, mirrors, refraction, and reflection. His analyses of reflection and refraction considered the vertical and horizontal components of light rays separately.
Alhazen studied the process of sight, the structure of the eye, image formation in the eye, and the visual system. Ian P. Howard argued in a 1996 Perception article that Alhazen should be credited with many discoveries and theories previously attributed to Western Europeans writing centuries later. For example, he described what became in the 19th century Hering's law of equal innervation. He wrote a description of vertical horopters 600 years before Aguilonius that is actually closer to the modern definition than Aguilonius's—and his work on binocular disparity was repeated by Panum in 1858. Craig Aaen-Stockdale, while agreeing that Alhazen should be credited with many advances, has expressed some caution, especially when considering Alhazen in isolation from Ptolemy, with whom Alhazen was extremely familiar. Alhazen corrected a significant error of Ptolemy regarding binocular vision, but otherwise his account is very similar; Ptolemy also attempted to explain what is now called Hering's law. In general, Alhazen built on and expanded the optics of Ptolemy.
In a more detailed account of Ibn al-Haytham's contribution to the study of binocular vision based on Lejeune and Sabra, Raynaud showed that the concepts of correspondence, homonymous and crossed diplopia were in place in Ibn al-Haytham's optics. But contrary to Howard, he explained why Ibn al-Haytham did not give the circular figure of the horopter and why, by reasoning experimentally, he was in fact closer to the discovery of Panum's fusional area than that of the Vieth-Müller circle. In this regard, Ibn al-Haytham's theory of binocular vision faced two main limits: the lack of recognition of the role of the retina, and obviously the lack of an experimental investigation of ocular tracts.
Alhazen's most original contribution was that, after describing how he thought the eye was anatomically constructed, he went on to consider how this anatomy would behave functionally as an optical system. His understanding of pinhole projection from his experiments appears to have influenced his consideration of image inversion in the eye, which he sought to avoid. He maintained that the rays that fell perpendicularly on the lens (or glacial humor as he called it) were further refracted outward as they left the glacial humor and the resulting image thus passed upright into the optic nerve at the back of the eye. He followed Galen in believing that the lens was the receptive organ of sight, although some of his work hints that he thought the retina was also involved.
Alhazen's synthesis of light and vision adhered to the Aristotelian scheme, exhaustively describing the process of vision in a logical, complete fashion.
His research in catoptrics (the study of optical systems using mirrors) was centred on spherical and parabolic mirrors and spherical aberration. He made the observation that the ratio between the angle of incidence and refraction does not remain constant, and investigated the magnifying power of a lens.
Alhazen was the first physicist to give a complete statement of the law of reflection. He was the first to state that the incident ray, the reflected ray, and the normal to the surface all lie in the same plane, perpendicular to the reflecting plane.
His work on catoptrics in Book V of the Book of Optics contains a discussion of what is now known as Alhazen's problem, first formulated by Ptolemy in 150 AD. It comprises drawing lines from two points in the plane of a circle meeting at a point on the circumference and making equal angles with the normal at that point. This is equivalent to finding the point on the edge of a circular billiard table at which a player must aim a cue ball at a given point to make it bounce off the table edge and hit another ball at a second given point. Thus, its main application in optics is to solve the problem, "Given a light source and a spherical mirror, find the point on the mirror where the light will be reflected to the eye of an observer." This leads to an equation of the fourth degree. This eventually led Alhazen to derive a formula for the sum of fourth powers, where previously only the formulas for the sums of squares and cubes had been stated. His method can be readily generalized to find the formula for the sum of any integral powers, although he did not himself do this (perhaps because he only needed the fourth power to calculate the volume of the paraboloid he was interested in). He used his result on sums of integral powers to perform what would now be called an integration, where the formulas for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. Alhazen eventually solved the problem using conic sections and a geometric proof. His solution was extremely long and complicated and may not have been understood by mathematicians reading him in Latin translation. Later mathematicians used Descartes' analytical methods to analyse the problem. An algebraic solution to the problem was finally found in 1965 by Jack M. Elkin, an actuary. Other solutions were discovered in 1989 by Harald Riede and in 1997 by the Oxford mathematician Peter M. Neumann. Recently, Mitsubishi Electric Research Laboratories (MERL) researchers solved the extension of Alhazen's problem to general rotationally symmetric quadric mirrors including hyperbolic, parabolic and elliptical mirrors.
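The formula in question, for the sum of the first n fourth powers, reads in modern notation (Alhazen's own derivation was geometric rather than algebraic):

$$ \sum_{k=1}^{n} k^4 = \frac{n(n+1)(2n+1)\left(3n^2 + 3n - 1\right)}{30} $$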
The camera obscura was known to the ancient Chinese, and was described by the Han Chinese polymath Shen Kuo in his scientific book Dream Pool Essays, published in the year 1088 C.E. Aristotle had discussed the basic principle behind it in his Problems, but Alhazen's work contained the first clear description of the camera obscura and an early analysis of the device.
Ibn al-Haytham used a camera obscura mainly to observe a partial solar eclipse. In his essay, Ibn al-Haytham writes that he observed the sickle-like shape of the sun at the time of an eclipse. The introduction reads as follows: "The image of the sun at the time of the eclipse, unless it is total, demonstrates that when its light passes through a narrow, round hole and is cast on a plane opposite to the hole it takes on the form of a moonsickle."
His findings solidified the treatise's place in the history of the camera obscura, but the work is important in many other respects as well.
Ancient optics and medieval optics were divided into optics and burning mirrors. Optics proper mainly focused on the study of vision, while burning mirrors focused on the properties of light and luminous rays. On the shape of the eclipse is probably one of the first attempts made by Ibn al-Haytham to articulate these two sciences.
Very often Ibn al-Haytham's discoveries benefited from the intersection of mathematical and experimental contributions. This is the case with On the shape of the eclipse. Besides the fact that this treatise allowed more people to study partial eclipses of the sun, it especially made it possible to better understand how the camera obscura works. The treatise is a physico-mathematical study of image formation inside the camera obscura. Ibn al-Haytham takes an experimental approach, and determines the result by varying the size and the shape of the aperture, the focal length of the camera, and the shape and intensity of the light source.
In his work he explains the inversion of the image in the camera obscura, the fact that the image is similar to the source when the hole is small, but also the fact that the image can differ from the source when the hole is large. All these results are produced by using a point analysis of the image.
In the seventh tract of his book of optics, Alhazen described an apparatus for experimenting with various cases of refraction, in order to investigate the relations between the angle of incidence, the angle of refraction and the angle of deflection. This apparatus was a modified version of an apparatus used by Ptolemy for a similar purpose.
Alhazen basically states the concept of unconscious inference in his discussion of colour before adding that the inferential step between sensing colour and differentiating it is shorter than the time taken between sensing and any other visible characteristic (aside from light), and that "time is so short as not to be clearly apparent to the beholder." Naturally, this suggests that the colour and form are perceived elsewhere. Alhazen goes on to say that information must travel to the central nerve cavity for processing and:
the sentient organ does not sense the forms that reach it from the visible objects until after it has been affected by these forms; thus it does not sense color as color or light as light until after it has been affected by the form of color or light. Now the affectation received by the sentient organ from the form of color or of light is a certain change; and change must take place in time; … and it is in the time during which the form extends from the sentient organ's surface to the cavity of the common nerve, and in (the time) following that, that the sensitive faculty, which exists in the whole of the sentient body will perceive color as color… Thus the last sentient's perception of color as such and of light as such takes place at a time following that in which the form arrives from the surface of the sentient organ to the cavity of the common nerve.
Alhazen explained color constancy by observing that the light reflected from an object is modified by the object's color. He explained that the quality of the light and the color of the object are mixed, and the visual system separates light and color. In Book II, Chapter 3 he writes:
Again the light does not travel from the colored object to the eye unaccompanied by the color, nor does the form of the color pass from the colored object to the eye unaccompanied by the light. Neither the form of the light nor that of the color existing in the colored object can pass except as mingled together and the last sentient can only perceive them as mingled together. Nevertheless, the sentient perceives that the visible object is luminous and that the light seen in the object is other than the color and that these are two properties.
The Kitab al-Manazir (Book of Optics) describes several experimental observations that Alhazen made and how he used his results to explain certain optical phenomena using mechanical analogies. He conducted experiments with projectiles and concluded that only the impact of perpendicular projectiles on surfaces was forceful enough to make them penetrate, whereas surfaces tended to deflect oblique projectile strikes. For example, to explain refraction from a rare to a dense medium, he used the mechanical analogy of an iron ball thrown at a thin slate covering a wide hole in a metal sheet. A perpendicular throw breaks the slate and passes through, whereas an oblique one with equal force and from an equal distance does not. He also used this result to explain how intense, direct light hurts the eye, using a mechanical analogy: Alhazen associated 'strong' lights with perpendicular rays and 'weak' lights with oblique ones. The obvious answer to the problem of multiple rays and the eye was in the choice of the perpendicular ray, since only one such ray from each point on the surface of the object could penetrate the eye.
Sudanese psychologist Omar Khaleefa has argued that Alhazen should be considered the founder of experimental psychology, for his pioneering work on the psychology of visual perception and optical illusions. Khaleefa has also argued that Alhazen should also be considered the "founder of psychophysics ", a sub-discipline and precursor to modern psychology. Although Alhazen made many subjective reports regarding vision, there is no evidence that he used quantitative psychophysical techniques and the claim has been rebuffed.
Alhazen offered an explanation of the Moon illusion, an illusion that played an important role in the scientific tradition of medieval Europe. Many authors repeated explanations that attempted to solve the problem of the Moon appearing larger near the horizon than it does when higher up in the sky. Alhazen argued against Ptolemy's refraction theory, and defined the problem in terms of perceived, rather than real, enlargement. He said that judging the distance of an object depends on there being an uninterrupted sequence of intervening bodies between the object and the observer. When the Moon is high in the sky there are no intervening objects, so the Moon appears close. The perceived size of an object of constant angular size varies with its perceived distance. Therefore, the Moon appears closer and smaller high in the sky, and further and larger on the horizon. Through works by Roger Bacon, John Pecham and Witelo based on Alhazen's explanation, the Moon illusion gradually came to be accepted as a psychological phenomenon, with the refraction theory being rejected in the 17th century. Although Alhazen is often credited with the perceived distance explanation, he was not the first author to offer it. Cleomedes (c. 2nd century) gave this account (in addition to refraction), and he credited it to Posidonius (c. 135–50 BCE). Ptolemy may also have offered this explanation in his Optics, but the text is obscure. Alhazen's writings were more widely available in the Middle Ages than those of these earlier authors, and that probably explains why Alhazen received the credit.
Therefore, the seeker after the truth is not one who studies the writings of the ancients and, following his natural disposition, puts his trust in them, but rather the one who suspects his faith in them and questions what he gathers from them, the one who submits to argument and demonstration, and not to the sayings of a human being whose nature is fraught with all kinds of imperfection and deficiency. The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and... attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.
An aspect associated with Alhazen's optical research is related to systemic and methodological reliance on experimentation (i'tibar) (Arabic: اختبار) and controlled testing in his scientific inquiries. Moreover, his experimental directives rested on combining classical physics (ilm tabi'i) with mathematics (ta'alim; geometry in particular). This mathematical-physical approach to experimental science supported most of his propositions in Kitab al-Manazir (The Optics; De aspectibus or Perspectivae) and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics (the study of the reflection and refraction of light, respectively).
According to Matthias Schramm, Alhazen "was the first to make a systematic use of the method of varying the experimental conditions in a constant and uniform manner, in an experiment showing that the intensity of the light-spot formed by the projection of the moonlight through two small apertures onto a screen diminishes constantly as one of the apertures is gradually blocked up." G. J. Toomer expressed some skepticism regarding Schramm's view, partly because at the time (1964) the Book of Optics had not yet been fully translated from Arabic, and Toomer was concerned that without context, specific passages might be read anachronistically. While acknowledging Alhazen's importance in developing experimental techniques, Toomer argued that Alhazen should not be considered in isolation from other Islamic and ancient thinkers. Toomer concluded his review by saying that it would not be possible to assess Schramm's claim that Ibn al-Haytham was the true founder of modern physics without translating more of Alhazen's work and fully investigating his influence on later medieval writers.