robench-2024b Collection · 48 items · Updated
context (string, 100–10.3k) | A (string, 100–7.26k) | B (string, 100–5.61k) | C (string, 100–10.3k) | D (string, 100–3.93k) | label (string, 4 classes)
---|---|---|---|---|---
Table 2: Oscillation amplitudes of a neutrino at different projected energies, for an assumed mass $m = 2\,\mathrm{eV}$.
|
$2.3\times 10^{-10}$
|
Let us consider an electron ($\omega_{0}=7.6\times 10^{20}\,\mathrm{s}^{-1}$), the lightest elementary particle apart from neutrinos. From Eq. (32), the amplitude of the proper-time oscillation is $\mathring{T}_{0}=1.3\times 10^{-21}\,\mathrm{s}$. Projected at an energy of 1 TeV, the amplitudes from Eq. (72) are $\mathring{T}=1.8\times 10^{-18}\,\mathrm{s}$ and $\mathring{\mathbf{X}}=5.6\times 10^{-10}\,\mathrm{m}$. Again, the oscillations are too small for detection.
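A quick numerical consistency check of the quoted amplitudes (a minimal sketch; the relations $\mathring{T}_{0}\approx 1/\omega_{0}$ and $\mathring{\mathbf{X}}\approx c\,\mathring{T}$ are inferred from the quoted numbers, not taken from Eqs. (32) and (72) themselves):

```python
# Minimal consistency check of the quoted oscillation amplitudes.
# Assumption: T0 ~ 1/omega_0 and X ~ c*T, inferred from the numbers in the text.
c = 2.998e8            # speed of light, m/s
omega_0 = 7.6e20       # electron frequency, 1/s (from the text)
T0 = 1.0 / omega_0     # proper-time amplitude estimate
print(f"T0 ~ {T0:.2e} s")              # ~1.3e-21 s, matching the quoted value
T_projected = 1.8e-18                  # s, quoted amplitude at 1 TeV
print(f"X ~ {c * T_projected:.2e} m")  # ~5.4e-10 m, close to the quoted 5.6e-10 m
```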
|
$7.4\times 10^{-12}$
|
$4.0\times 10^{-23}$
|
C
|
In addition we observe a number of further states for which likely assignments are shown in the figure. In particular we find two spin 3 F-wave states and another set of excited S-waves.
|
In addition we observe a number of further states for which likely assignments are shown in the figure. In particular we find two spin 3 F-wave states and another set of excited S-waves.
|
The results are listed in Table 5. In addition we take a look at the hyperfine splittings between spin-singlet and spin-triplet states
|
As a check, the kinetic masses for spin-averaged S-wave $D$ and $D_{s}$ mesons were also calculated. At our final choice $\kappa_{c}=0.123$ the tuned kinetic masses agree with experimental values to better than $2\%$.
|
To disentangle spin-dependent from spin-independent contributions we further define spin-averaged masses
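The explicit definition is not reproduced in this excerpt; the usual convention, quoted here only for orientation, weights the pseudoscalar and vector masses by their $2J+1$ multiplicities:
\[
\overline{M}(1S)\;=\;\tfrac{1}{4}\bigl(M_{\mathrm{PS}}+3\,M_{\mathrm{V}}\bigr).
\]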
|
D
|
$\bigl(\langle\bar{E}_{k},V\rangle_{M(x,f)}\circ\theta^{V}(r)\bigr)(\alpha(0))=f^{2}(t(r))\,g_{S}(\beta(r))\bigl(E_{k}(\beta(r)),\beta^{\prime}(r)\bigr).$
|
$(w,z)\in(h,k)\langle\beta,\beta^{\prime}\rangle$.
|
of the velocity $\beta^{\prime}$ of the projection $\beta=\sigma\circ\alpha$.
|
Now we can use [20, Prp. 12.22(2)] to express $\beta^{\prime}$
|
$\beta^{\prime}$ from the velocity $\alpha^{\prime}$
|
C
|
At present, not all processes of photon production in QGP and HG phase are amenable to a calculation of viscous (shear and bulk) corrections.
|
At present, not all processes of photon production in QGP and HG phase are amenable to a calculation of viscous (shear and bulk) corrections.
|
Taking into account the uncertainty of the system evolution, we ignored the viscous correction to the emission rate, which seems to work well in general. The calculated $p_{\rm t}$ spectra of direct photons from both initial conditions agree quite well with the latest direct photon data, for all centralities.
|
The elliptic flow $v_{2}$ of direct photons for all three centralities in this calculation coincides with the experimental data.
|
For example, the AMY rate covers the processes of all orders according to the hard thermal loop calculation AMY ,
|
D
|
Note that the operators $\hat{N}_{\sigma}$ are defined very differently from a common definition as a sum over occupations $\sum_{i}\hat{a}^{\dagger}_{\sigma i}\hat{a}_{\sigma i}$. In our definition, the spreading of the basis wave functions $\phi_{\sigma i}(x)$ to the neighboring well is automatically taken into account. Moreover, the definition used here treats single-particle superpositions appropriately, i.e., a left/right probability is calculated directly from the full single-particle density and not as a sum of partial probabilities.
|
In this article, we have studied the dynamical properties of two ultra-cold bosons confined in a one-dimensional double-well potential, initially occupying the lowest state of a chosen site. We compare the exact dynamics governed by the full two-body Hamiltonian with two simplified two-mode models; in particular, we compare the evolution of the particle density and of the spatial correlations between particles. We show that for a shallow barrier and strong enough interactions the simplified models break down, and the correct multi-orbital description cannot be substituted with a two-mode model even if all appropriate interaction terms are taken into account. The fundamental difference between the exact and two-mode descriptions emerges when inter-particle correlations are considered. For example, the evolution of the probability that both bosons are found in opposite wells of the potential depends crucially on couplings to higher orbitals of the external potential. This fact sheds some light on recent theoretical results and opens some perspectives for further experimental exploration.
|
To study the dynamical properties of the system we assume that initially two bosons occupy the lowest state of a chosen (left) site of the double-well potential
|
Inspired by this simple observation, in this article we study the dynamical properties of two bosons confined in a one-dimensional double-well potential and initially occupying a chosen site. We numerically compare the exact, many-body dynamics of the system with the dynamics governed by simplified two-mode Hamiltonians. The comparison is performed for different interaction strengths and different depths of the modeled double-well potential.
|
This illusory conviction that a complete two-mode Hamiltonian (4) is sufficient to describe the dynamical properties of the system in the strong interaction limit has to be revisited when, instead of densities, inter-particle correlations are considered. For example, let us consider one of the simplest correlations – the probability that bosons occupy different wells of the potential. In the case of two bosons, this probability is related to the density-density correlation:
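A compact way to write this probability for two bosons (a sketch, assuming well-number operators $\hat{N}_{L}$ and $\hat{N}_{R}$; the source's exact expression is not reproduced here): the product $\hat{N}_{L}\hat{N}_{R}$ equals one precisely when each well holds one particle and vanishes otherwise, so
\[
\mathcal{P}_{LR}(t)\;=\;\bigl\langle \hat{N}_{L}\hat{N}_{R}\bigr\rangle_{t}.
\]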
|
B
|
$\lambda_{c}^{\pm}=\frac{1}{2}\left(\gamma\pm\sqrt{\gamma^{2}+4(\sigma+\zeta)}\right)$
|
$\zeta>\frac{\gamma^{2}}{4}$
|
Expanding ($\gamma^{2}-4\zeta>0$)
|
Expanding ($\gamma^{2}-4\zeta<0$)
|
$\gamma^{2}<4\zeta$, the expanding eigenvalues $\lambda_{e}^{R}\pm i\lambda_{e}^{I}$ are complex,
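A small numerical sketch of these eigenvalue expressions (illustrative only; the parameter values below are hypothetical):

```python
import numpy as np

def contracting_eigenvalues(gamma, sigma, zeta):
    """lambda_c^± = (gamma ± sqrt(gamma^2 + 4(sigma + zeta))) / 2, as in the text."""
    disc = gamma**2 + 4.0 * (sigma + zeta)
    root = np.sqrt(disc + 0j)          # keep the complex branch if disc < 0
    return 0.5 * (gamma + root), 0.5 * (gamma - root)

def expanding_eigenvalues_are_complex(gamma, zeta):
    """The expanding eigenvalues form a complex pair when gamma^2 < 4*zeta."""
    return gamma**2 < 4.0 * zeta

# hypothetical parameter values, purely for illustration
gamma, sigma, zeta = 0.3, 0.1, 0.5
print(contracting_eigenvalues(gamma, sigma, zeta))
print(expanding_eigenvalues_are_complex(gamma, zeta))   # True: 0.09 < 2.0
```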
|
B
|
LagoudakisNphys2008 ; RoumposNphys2011 ; NardinNphys2011 ; SanvittoNphot2011 ; DominiciSA2015 ; BoulierPRL2016 ; caputo2016topological ; caputo2019josephson
|
and $f_{1}$ is the spin-conserved and spin-exchange polariton-polariton
|
and $f_{2}$ are the same-spin and cross-spin nonradiative loss rates, respectively.
|
$\sigma=\pm$ representing the spin state of polaritons with effective mass
|
In the absence of an external magnetic field the “spin-up” and “spin-down” states $\sigma=\pm$ of noninteracting polaritons, or their linearly
|
D
|
Let $\{a,b,\gamma,\delta\}$ be an unbroken coupled SUSY, $\mathcal{L}$ and $\mathcal{A}$ be as above, and, also as above, $\ker a=\{\psi_{i,0}:i\in I\}$ for some index set $I$. An uncertainty principle holds for $\mathcal{L}$ and $\mathcal{A}$ and the minimizers are the states $\psi_{i,0}$.
|
Let $\psi$ be a normalized wavefunction. Note that Robertson’s uncertainty relation gives us that
|
That $\mathcal{H}_{2}=\mathcal{H}_{1}+1$ is a restatement of the commutation relation for $a$ and $a^{*}$, which is equivalent to the canonical commutation relation. This cannot serve as a point of generalization, as the canonical commutation relation is too rigid [28, p. 274]. Instead, we use the property that the QMHO and its partner Hamiltonian both have two distinct factorizations to develop our theory, and this leads into our first definition.
|
The canonical uncertainty principle in quantum mechanics is the Heisenberg uncertainty principle, which is an uncertainty principle between the position operator $x$ and the momentum operator $p$. The Heisenberg uncertainty principle says that, in natural units, the standard deviation in
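The statement is cut off in this excerpt; for completeness, the standard form it refers to (in natural units, $\hbar=1$), together with Robertson's generalization, reads:
\[
\sigma_{x}\,\sigma_{p}\;\geq\;\tfrac{1}{2},
\qquad
\sigma_{A}\,\sigma_{B}\;\geq\;\tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|\quad\text{(Robertson)}.
\]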
|
Let $\Psi=(\psi_{1},\psi_{2})^{\operatorname{T}}$ be the state in which we are evaluating the expectation; then $1=\|\psi_{1}\|^{2}+\|\psi_{2}\|^{2}$. Again making use of Robertson’s inequality, we have that
|
A
|
$w(t)=\frac{1}{4\pi}\int_{S^{2}}\Bigl|\mathbf{v}_{e^{\pm}}(t)-\frac{\mathbf{v}_{e^{-}}(t)+\mathbf{v}_{e^{+}}(t)}{2}\Bigr|\,\mathrm{d}\Omega=\frac{1}{4\pi}\int_{S^{2}}\frac{|\mathbf{v}_{e^{-}}(t)-\mathbf{v}_{e^{+}}(t)|}{2}\,\mathrm{d}\Omega=\frac{2}{3}\,v(t).$
|
where $t_{1}\approx 70{,}000$ a and $V_{1}=V_{0}\cdot(t_{1}/t_{0})^{3/2}>0$ but now
|
$\lessapprox t_{0}\lessapprox 10$ s for the initial value of time and
|
$\lessapprox t\lessapprox 1.38\times 10^{10}$ a, then $R(t)\sim t^{2/3}$
|
Consider first the radiation epoch $10\ \mathrm{s}\lessapprox t\lessapprox 70{,}000$ a (here “a” stands for “years” as usual).
|
D
|
$|\psi\rangle_{F}=\tfrac{1}{2}\bigl(|\psi\rangle_{I}+U|\psi\rangle_{I}\bigr)|0\rangle+\tfrac{1}{2}\bigl(|\psi\rangle_{I}-U|\psi\rangle_{I}\bigr)|1\rangle$
|
We now measure the ancilla qubit in the computational basis. If the result is $|0\rangle$ then the input state becomes
|
while if the measured outcome of the ancilla is $|1\rangle$ then the input state becomes
|
A quantum circuit that encodes such a state with three qubits starts with three quantum states: the first encoding the original qubit state, and another two ancilla qubits initialised to $|0\rangle$. Two CNOT gates couple the first qubit to the second $|0\rangle$ state and the second $|0\rangle$ state to the third, such that, in the end, the logical qubit is encoded on three qubits. This code has a distance of three between the two codeword states and hence is capable of correcting a single bit-flip error. It is necessary to have three physical bit flips in order to transform the logical state from $|0\rangle_{L}$ to $|1\rangle_{L}$. Therefore, if we assume $|\psi\rangle=|0\rangle_{L}$, then with one single bit flip we will obtain a final state that still remains closer to $|0\rangle_{L}$. The distance between two codeword states, $d$, is related to the number of errors that can be corrected, $t$, by means of the relation
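The relation itself is cut off in this excerpt; the standard statement is $t=\lfloor(d-1)/2\rfloor$. As a minimal classical illustration of the repetition idea described above (a plain Python sketch, not the authors' quantum circuit), encoding, a single flip, and majority-vote decoding look as follows:

```python
def encode(bit):
    """Repetition encoding: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def flip(codeword, i):
    """Apply a single bit-flip error on position i."""
    corrupted = codeword.copy()
    corrupted[i] ^= 1
    return corrupted

def decode(codeword):
    """Majority vote: corrects up to t = (d - 1) // 2 = 1 flip for distance d = 3."""
    return int(sum(codeword) >= 2)

logical = 0
received = flip(encode(logical), 1)   # one bit-flip error
assert decode(received) == logical    # a single flip is corrected
print(received, "->", decode(received))
```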
|
The error correction prescription on the other side will need some additional ancilla qubits, because we cannot directly measure the logical state without destroying it. Those ancilla qubits are used to extract the syndrome information related to possible errors without discriminating the state of any qubit. The error correction connects the physical qubits to the new ancilla qubits by means of CNOT gates which check the parity of the three-qubit data block. In any case, there is either no error or a single bit-flip error, and in both cases the ancilla qubits are flipped to one unique state based on the parity of the data block. These qubits are then measured and provide the syndrome of the error. This will then allow us to apply the correction gate in a meaningful way. In order to correct for both bit and phase flips, the nine-qubit code may be employed. Other generalisations are possible, but the simple discussion up to this point suffices for the matter at hand. Describing error correction codes from the perspective of the quantum state is often cumbersome and inefficient, as the state representations and the circuits themselves differ from code to code. The error correction prescription, however, can be described in a unified way by means of the so-called stabiliser formalism [22], [23]. The basic idea is to describe quantum states in terms of operators. Given a state $|\psi\rangle$, one can say it is being stabilised by some operator $K$ if that state is a $+1$ eigenstate of $K$, namely $K|\psi\rangle=|\psi\rangle$. A multi-qubit state will be described in an operatorial sense by analysing the group properties of the multi-qubit operators acting as stabilisers. Given the Pauli group for $N$ qubits, $\mathcal{P}_{N}$, an $N$-qubit stabiliser state is defined by the $N$ generators of an Abelian subgroup $\mathcal{G}$ of the $N$-qubit Pauli group that satisfies
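A small numerical sketch of this stabiliser idea for the three-qubit bit-flip code (a standard textbook example used here purely as an illustration, not taken from the source): the codewords $|000\rangle$ and $|111\rangle$ are $+1$ eigenstates of the generators $Z_{1}Z_{2}$ and $Z_{2}Z_{3}$.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabiliser generators of the 3-qubit bit-flip code
K1 = kron(Z, Z, I2)   # Z1 Z2
K2 = kron(I2, Z, Z)   # Z2 Z3

ket000 = np.zeros(8); ket000[0] = 1.0   # |000>
ket111 = np.zeros(8); ket111[7] = 1.0   # |111>

for K in (K1, K2):
    for psi in (ket000, ket111):
        assert np.allclose(K @ psi, psi)   # K|psi> = |psi>: psi is stabilised by K
print("Both codewords are +1 eigenstates of Z1Z2 and Z2Z3.")
```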
|
A
|
Note that the curvature tensor $R_{g}$ of an Einstein manifold
|
$R_{g}$ of $(M,g)$ as an operator in (5) is bounded, which means that
|
Concerning the Einstein condition, if $R_{g}$ comes from an Einstein metric
|
$(M,g)$, if bounded as an operator, always solves the quantum vacuum Einstein
|
$Q_{M}\in\mathfrak{R}$ satisfying the quantum vacuum Einstein equation as in
|
C
|
The HR spacetime haggard_quantum-gravity_2015 ; de_lorenzo_improved_2016 constructed below provides a minimalistic model for a geometry in which a trapped region (formed by collapsing matter) transitions to an anti-trapped region (from which matter is released). The transition is assumed to happen through quantum gravitational effects that are non-negligible only in a finite spatiotemporal region.
|
In this section we construct what we call here the Haggard–Rovelli spacetime. We follow a novel route for its construction that is adapted to the needs of the calculation and is more precise and conceptually clear. Note that the use of the word ‘spacetime’ here is an abuse of terminology, as this spacetime has a region missing, which is to be imagined as the slot where the LQG transition amplitude will go; see Figure 1. The important point is that the geometry exterior to this excised region is parametrised by two parameters, the bounce time $T$ and the mass $m$. There are four main regions in the spacetime, described by corresponding coordinate patches given explicitly below. With reference to Figure 2, region $I$ is the flat interior of a collapsing spherical shell. Region $II$ contains the trapped surface formed by the collapsing shell. Region $III$ contains the anti-trapped surface formed by an expanding spherical null shell, while region $IV$ is the flat interior of this shell.
|
Figure 1: Geometry transition as a path integral over geometries. The shaded region (pale green) is where the quantum transition occurs. Outside this compact spacetime region, quantum theory can be disregarded and the geometry is a solution of Einstein’s equations. This induces an intrinsic metric $q$ and extrinsic curvature $K$ of the boundary surfaces (dark green). The boundary state for the sum over geometries is a semiclassical state, peaked on both $q$ and $K$. The amplitudes of covariant LQG employed here display an emergent behavior as a Wheeler–Misner–Hawking sum in the limit of large quantum numbers.
|
The transition region is excised from spacetime, by introducing a spacelike compact interior boundary, which surrounds the quantum region. Outside this region the metric solves Einstein’s field equations exactly everywhere, including on the interior boundary.
|
The key technical result in haggard_quantum-gravity_2015 is the discovery of an ‘exterior metric’ describing this process which solves Einstein’s field equations exactly everywhere, except for the transition region, which is bounded by a compact boundary. The existence of this exterior metric, which we henceforth refer to as the Haggard–Rovelli (HR) metric (similar spacetimes are considered in barcelo_mutiny_2014 ; barcelo_lifetime_2015 ; barcelo_black_2016 ; barcelo_exponential_2016 ; hajicek_singularity_2001 ; ambrus_quantum_2005-1), renders this process plausible: General Relativity need only be violated in a compact spacetime region, and this is something that quantum theory allows. The stability of the exterior spacetime after the quantum transition was studied in de_lorenzo_improved_2016. The known instabilities of white hole spacetimes were shown to possibly limit the duration of the anti-trapped phase, but do not seem to otherwise forbid the transition from taking place.
|
C
|
(2) $L\to\infty$ with fixed $\alpha\ll 1$ and fixed particle
|
Any increase in $P_{2}$ should be accompanied
|
momentum $x_{e}E$ shown in the second diagram should be negative,
|
roughly speaking, related to the probability $P_{2}$ of one splitting
|
should be taken to be $\alpha(Q_{\perp})$, where $Q_{\perp}$ is the
|
A
|
In Section 2, we introduce the twisted affine Lie algebra $\hat{L}(\mathfrak{g},\sigma)$ attached to a finite order automorphism $\sigma$ of $\mathfrak{g}$, following [Ka, Chap. 8]. We prove some preparatory lemmas which are used later in Section 4.
|
In Section 3, we define the space of twisted covacua attached to a Galois cover of an algebraic curve. We prove that this space is finite dimensional under the assumption given in Definition 3.5.
|
In this section we define the space of twisted covacua attached to a Galois cover of an algebraic curve. We prove that this space is finite dimensional.
|
The aim of this section is to prove the Factorization Theorem, which identifies the space of covacua for a genus $g$ nodal curve
|
As proved in Lemma 3.7, the space of twisted covacua is finite dimensional. We sheafify the notion of twisted covacua associated to a family of $s$-pointed $\Gamma$-curves as in Definition 7.7 and show that
|
A
|
$+\,40\,\Phi_{Q}\csc^{5}\Phi_{P}\cot\Phi_{Q}\bigl[12\,\Phi_{P}\cos\Phi_{P}-9\sin\Phi_{P}-\sin(3\Phi_{P})\bigr]-224\,\Phi_{Q}\cot\Phi_{Q}\csc^{2}\Phi_{Q}\Bigr\}.$
|
The integral over the three-dimensional angle $\theta$ can then be performed analytically, which yields a more manageable expression,
|
where $x\equiv P/Q$ and $\theta$ is the angle between $\boldsymbol{p}$ and $\boldsymbol{q}$.
|
We present here some details of the calculation discussed in the main text. In particular, to carry out the four-momentum integrations such as $\int\mathrm{d}^{4}P$, we find it very useful to change variables from $(P^{0},|\boldsymbol{p}|)$ to $(P,\Phi_{P})$, where $P$ is the magnitude of the Euclidean four-vector and $\Phi_{P}$ is the four-dimensional polar angle, $\tan\Phi_{P}=|\boldsymbol{p}|/P^{0}$. The particular expression that we use in these coordinates is only valid for $0\leq\Phi_{P}\leq\pi/2$, but due to the symmetry of the self-energy under $P^{0}\mapsto-P^{0}$, cf. Eqs. (5) and (6) of the main text, one may use the measure
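The measure itself is not reproduced in this excerpt; in standard four-dimensional Euclidean polar coordinates it would take the following form (a hedged reconstruction, with the factor of 2 accounting for the restriction to $0\leq\Phi_{P}\leq\pi/2$ via the $P^{0}\mapsto-P^{0}$ symmetry):
\[
\int\mathrm{d}^{4}P\;=\;2\int_{0}^{\infty}\mathrm{d}P\,P^{3}\int_{0}^{\pi/2}\mathrm{d}\Phi_{P}\,\sin^{2}\!\Phi_{P}\int_{S^{2}}\mathrm{d}\Omega\,.
\]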
|
As mentioned in the main text, the starting point of our N3LO computation is the two-loop HTL pressure as written down in Eq. (34) of Ref. [27], where we convert the sum-integrals into ordinary 3+1 dimensional integrals because we work at $T=0$. The full expression is rather unwieldy when written out in full and is not reproduced here, but we note that the simplifications outlined in the main text make extracting the double logarithm significantly easier. An additional useful result is that, in the notation of Ref. [27], the propagator $\Delta_{X}=\mathcal{O}(\alpha_{s})$, which allows us to discard a number of higher-order terms.
|
A
|
$=9(1+\tau\bar{\tau})+16\,\tau\bar{\tau}+15(\tau\bar{1}+1\bar{\tau}),$
|
Comparing the equations above with Table 1, we find that the state counting in Table 1 actually counts the number of fusion channels to trivial fluxons $1$ and fluxons $\tau\bar{\tau}$ appearing on the RHS of each of the equations above. If we did the state counting this way, we would be using the method of counting by fusion channels, mentioned at the beginning of Section 3. Nonetheless, in general cases with many different boundary conditions, it is not easy to decide which fusion channels we should select. (According to Ref. Hung and Wan (2015), anyons condensing at a boundary can appear with multiplicities, such that they contribute to ground states multiple times.) Hence, it is better to find a more systematic method.
|
The state counting using the extended Levin-Wen model therefore tells us which subspace of a multi-fluxon Hilbert space should be singled out as the physical Hilbert space. This result is consistent with the gapped boundary of the disk being due to condensation of $\tau\bar{\tau}$ at the boundary. On a sphere, the Hilbert space of multiple $\tau\bar{\tau}$'s consists of the fusion channels of these $\tau\bar{\tau}$'s into the trivial anyon $1$; that is, the total topological charge of the system must be trivial. On a disk, however, according to Ref. Hung and Wan (2015), when a $\tau\bar{\tau}$ condenses at the boundary, it also contributes a copy of the trivial anyon $1$. Thus, the physical Hilbert space of multiple $\tau\bar{\tau}$'s on a disk consists of the fusion channels of these $\tau\bar{\tau}$'s into either $1$ or $\tau\bar{\tau}$; the final fusion product $\tau\bar{\tau}$ is the total charge of the real $\tau\bar{\tau}$'s and must be annihilated at the boundary, as otherwise there would be excess fluxons in the system. This discussion motivates the basis in Fig. 3. Since fusion is associative, we can always present the fusion of multiple fluxons $\tau\bar{\tau}$ as a tree graph as in Fig. 3.
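As a schematic illustration of counting fusion channels on such a tree, one can factorize the doubled model into its left and right Fibonacci sectors and apply the fusion matrix repeatedly (a sketch under that assumption, not the source's Eq. (16); the counts it gives for five fluxons, 9, 15, 15 and 25 = 9 + 16, are consistent with the displayed equation above, though this identification is ours):

```python
import numpy as np

# Fibonacci fusion: basis order (1, tau); fusing with one more tau maps
# 1 -> tau and tau -> 1 + tau, i.e. multiplicity matrix M[out, in].
M = np.array([[0, 1],
              [1, 1]])

def fib_channels(n):
    """Number of ways to fuse n tau anyons into total charge 1 or tau."""
    v = np.array([0, 1])          # a single tau
    return np.linalg.matrix_power(M, n - 1) @ v

def doubled_channels(n):
    """Doubled model: channels for n copies of tau-taubar, by charge (c_L, c_R)."""
    f = fib_channels(n)
    return np.outer(f, f)         # rows: left charge (1, tau); cols: right charge

counts = doubled_channels(5)
# counts[0,0]: fusion to 1 1bar; counts[1,1]: to tau taubar;
# counts[1,0], counts[0,1]: to tau 1bar and 1 taubar
print(counts)   # [[ 9 15]
                #  [15 25]]  for five fluxons
```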
|
We compute using Eq. (16) and precisely the lattice in Fig. 2, namely with $P=5$, and obtain the state counting in Table 1. If we increase the plaquette number $P$, we can obtain the state counting for larger $N_{\tau\bar{\tau}}$, which is abbreviated by the ‘$\cdots$’ in the table.
|
Nevertheless, materials with boundaries are much easier to fabricate than closed ones. Understanding the anyonic exclusion statistics in topologically ordered states with boundaries is thus important. For such a system to have a well-defined, topologically protected, degenerate ground-state Hilbert space, which may support a robust quantum memory and quantum computing Kitaev (2003, 2006), the systems with gapped boundaries are of most interest. A recent work Hung and Wan (2015) has shown how gapped boundary conditions of a topological order dictate the ground state degeneracy of the topological order and how certain anyons in the bulk may connect to the gapped boundary. The fusion space structure of multiple anyons is closely related to the boundary conditions, which select only certain anyons that can move to the gapped boundary without any energy cost (in other words, these anyons condense at the boundary, Kitaev and Kong (2012); Levin (2013); Hung and Wan (2013, 2015)). Hence we expect that the boundary conditions of a topological order would affect the state counting of the anyons.
|
A
|
$K(x)\in\mathbb{N}\cup\{+\infty\}$ the Kolmogorov complexity of $x\in\mathbb{R}$.
|
famous $\Omega$ number [6, Section 14.8]) while $K(x)<+\infty$
|
whether $K(x)=+\infty$ or $K(x)<+\infty$ and in the latter case only the
|
Then $K(x)=+\infty$ corresponds to the situation when no algorithms
|
stationary or single) black hole such that $\partial M=\Sigma$ corresponds to
|
C
|
Firstly, we have the ones based on the $\mathcal{L}_{\infty}$ structure on linearized contact homology.
|
In the case that $(X,\omega)$ is a filling of $(Y,\alpha)$, we get an $\mathcal{L}_{\infty}$ augmentation
|
Suppose that $(X,\omega)$ is a symplectic filling of $(Y,\alpha)$.
|
Given $(X,\omega)$ a symplectic filling of $(Y,\alpha)$,
|
Let $(Y,\alpha)$ be a strict contact manifold, with symplectic filling $(X,\omega)$. As discussed in §3.4, the filling induces an $\mathcal{L}_{\infty}$ augmentation
|
C
|
$^{\circ}$ $S_{+}$ means the star is on the south pole in the spin-$(s+1/2)$ sector.
|
$^{\Diamond}$ “Complex” means the stars’ distribution is complex.
|
$^{\bullet}$ $E$ means the star of the pseudo spin is on the equator.
|
$^{\circ}$ $S_{+}$ means the star is on the south pole in the spin-$(s+1/2)$ sector.
|
$^{\star}$ $+$ ($-$) means the star is in the spin-$(s+1/2)$ (spin-$(s-1/2)$) sector.
|
B
|
$\dot{\mathbf{B}}=\nabla\times(\mathbf{v}\times\mathbf{B})-\nabla\times\left(\eta\,\nabla\times\mathbf{B}\right),\quad\nabla\cdot\mathbf{B}=0$
|
Note that the five magnetic terms involving $\mu_{0}$ in the final
|
$\dfrac{\partial U^{e}}{\partial r}$
|
Here, a dot above a symbol implies a partial time derivative, $\mu_{0}$
|
$\dfrac{\partial U^{e}}{\partial z}$
|
C
|
Denote by $\theta$ any positive quantity that is small enough depending on $\delta$ (for example $\theta\ll\delta^{50}$). This $\theta$ may take different values at different places. Let $C$ be any large absolute constant depending only on $r$, and $C_{\theta}$ be any constant depending on $\theta$. Unless otherwise stated, the constants in the $\lesssim$, $\ll$ and $O(\cdot)$ symbols will depend on $C_{\theta}$. Finally, if some statement $S$ about a random variable holds with probability $\mathbb{P}(S)\geq 1-C_{\theta}e^{-A^{\theta}}$ for some quantity $A>0$ and with given $\theta$ and $C_{\theta}$ independent of $A$, we will say this $S$ is $A$-certain.
|
We now turn to the proof of Theorem 1.3. This proof consists of two parts: (a) proving almost sure local well-posedness for (1.1) on the support of the Gibbs measure, and (b) applying formal invariance to extend local solutions to global ones. Since part (b) is essentially an adaptation of Bourgain’s classical proof [11], we shall focus on the local theory in part (a). For simplicity of exposition we will also replace $W^{2r+1}(u)$ by the pure power $|u|^{2r}u$ in the discussion below.
|
The heart of the proof of Proposition 3.3 is a collection of (probabilistic) multilinear estimates for $\mathcal{N}_{2l+1}$. We will state them in Proposition 3.4 below and show that they imply Proposition 3.3. We leave the proof of Proposition 3.4 to Section 5.
|
The rest of the paper is organized as follows. In Section 2 we introduce the gauge transform and reduce to a favorable nonlinearity, and define the norms that will be used in the proof below. In Section 3 we identify the precise structure of the solution according to the ideas of Section 1.3, and reduce the local well-posedness to some multilinear estimates, namely Proposition 3.4. In Section 4 we then set up the necessary tools (large deviation and counting estimates) needed in the proof of Proposition 3.4, and Section 5 contains the proof itself. Finally in Section 6 we apply an adapted version of Bourgain’s argument to extend local solutions to global ones and finish the proof of Theorem 1.3.
|
Proposition 3.4 will be proved in Section 5. In this section we make some preparations for the proof, namely we introduce two large deviation estimates and some counting estimates for integer lattice points.
|
C
|
$-\mu^{-1}(\gamma(0),\xi)=\exp\!\bigl(\pi\xi^{-1}(-2\mathrm{i}m)+\mathrm{i}(2\pi m^{(3)}-\pi)+\pi\xi\,\overline{(-2\mathrm{i}m)}\bigr)$; from the asymptotics of $-\mu^{-1}(\gamma(0),\xi)\,a(\gamma(0),\xi)$ as $\xi\to 0$ along $\mathbb{H}_{m}$ we conclude what we want.
|
We then conclude that the magnetic coordinate has the same jumps as the magnetic coordinate of the Ooguri-Vafa space.
|
where $\theta_{m}$ is the angle parametrizing the $U(1)$ fiber (we will call it the “magnetic angle”, hence the $m$ subscript).
|
Notice that our definition of marked point uses a branch of the Arg function (with $\operatorname{Arg}(z)\in[-\pi,\pi)$), so the magnetic angle is not a priori a globally continuous function of $m$. In the following, we will compute how the magnetic angle jumps when we go around a loop in the $m$ parameter. We will see that it matches the jump of the magnetic angle of the Ooguri-Vafa space.
|
Hence, we see that the magnetic angle has the same monodromy as the usual Ooguri-Vafa magnetic angle.
|
D
|
The concept that the near-Sun solar wind is divided into ‘quiet’ magnetic flux tubes (where near-$f_{ce}$ waves are preferentially observed) and ‘strong turbulence’ flux tubes where wave growth is suppressed is further supported by Figure 1 and Figure 5, where the bulk of near-$f_{ce}$ wave power is observed in the center of each quiet, near-radial magnetic field region, rather than near the edges. Waves near the edges would suggest growth due to instabilities associated with mixing plasma populations (e.g., Malaspina et al., 2015; Holmes et al., 2018). Waves near the magnetic structure center suggest that a property of the plasma within the flux tube is driving the instability.
|
Flux tubes where magnetic field turbulence is low contain a larger outward flux of strahl electrons. Those strahl electrons cause the sunward electron core drift (in the proton frame) to increase. The combination of larger strahl flux and stronger sunward electron core drift sets up electron distribution functions unstable to near-$f_{ce}$ wave growth. Details of the specific instability and wave growth process will be explored in future work.
|
The concept that the near-Sun solar wind is divided into ‘quiet’ magnetic flux tubes (where near-$f_{ce}$ waves are preferentially observed) and ‘strong turbulence’ flux tubes where wave growth is suppressed is further supported by Figure 1 and Figure 5, where the bulk of near-$f_{ce}$ wave power is observed in the center of each quiet, near-radial magnetic field region, rather than near the edges. Waves near the edges would suggest growth due to instabilities associated with mixing plasma populations (e.g., Malaspina et al., 2015; Holmes et al., 2018). Waves near the magnetic structure center suggest that a property of the plasma within the flux tube is driving the instability.
|
However, this picture is incomplete. Why should flux tubes with ’quiet’ solar wind (lower magnetic turbulence, hewing closer to the Parker spiral direction) show larger strahl electron flux? Perhaps this indicates multiple coronal source region properties. Perhaps it indicates different strahl radial evolution (efficiency of focusing and/or scattering) on ’quiet’ versus ’strongly turbulent’ magnetic flux tubes. Future work will explore these possibilities.
|
The study of near-$f_{ce}$ waves in the near-Sun solar wind has only just begun, and already it promises to provide insight into the regulation of electron heat flux (through improved understanding of electron population evolution and its connection with wave growth), the large-scale structure of the solar wind (by implying flux tubes of weakly turbulent magnetic field stretching back toward the Sun), and the nature of kinetic wave-particle interactions in the near-Sun solar wind.
|
C
|
$\mathbf{e}_{12}^{2}=\mathbf{e}_{12}\mathbf{e}_{12}=-\mathbf{e}_{1}\mathbf{e}_{2}\mathbf{e}_{2}\mathbf{e}_{1}=-\mathbf{e}_{1}(+1)\mathbf{e}_{1}=-\mathbf{e}_{1}\mathbf{e}_{1}=-1.$
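A quick numerical check of this identity (an illustration only, using the standard Pauli-matrix representation of $\mathit{Cl}_{3,0}$, in which $\mathbf{e}_{k}\mapsto\sigma_{k}$ satisfies $\mathbf{e}_{j}\mathbf{e}_{k}+\mathbf{e}_{k}\mathbf{e}_{j}=2\delta_{jk}$):

```python
import numpy as np

# Pauli-matrix representation of the Cl(3,0) generators e1, e2, e3
e1 = np.array([[0, 1], [1, 0]], dtype=complex)        # sigma_x
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)     # sigma_y
e3 = np.array([[1, 0], [0, -1]], dtype=complex)       # sigma_z

e12 = e1 @ e2                                # bivector e12 = e1 e2
assert np.allclose(e12 @ e12, -np.eye(2))    # e12^2 = -1, as computed above
print("e12^2 = -1 verified in the Pauli representation of Cl(3,0).")
```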
|
$\mathit{Cl}_{3,0}$ and $\mathit{Cl}_{1,2}$) or $s^{2}-S^{2}=0$ (in $\mathit{Cl}_{0,3}$ and
|
3 Square roots in $\mathit{Cl}_{3,0}$ and $\mathit{Cl}_{1,2}$ algebras
|
However, in $\mathit{Cl}_{1,2}$ a similar computation gives
|
3.4 Examples for $\mathit{Cl}_{3,0}$ and $\mathit{Cl}_{1,2}$
|
C
|
This section describes the task of generalization of odor classification under sensor drift and defines several classifier models: the SVM ensemble, neural network ensemble, skill neural network, and context+skill neural network.
|
Two preprocessing steps were applied to the data used by all models included in this paper. The first step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5; the data were too incomplete for drawing meaningful conclusions, and with such data missing it was not possible to construct contexts from odor samples of each class in previous batches. The second step normalized each feature so that all values along each of the 128 feature dimensions have zero mean and unit variance, as is standard practice in deep learning.
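A minimal sketch of these two preprocessing steps (hypothetical column names; it assumes the data are loaded into a pandas DataFrame with a `gas` label column and 128 feature columns, which is not specified in the text):

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, feature_cols: list) -> pd.DataFrame:
    """Apply the two preprocessing steps described in the text."""
    # Step 1: drop all samples of gas 6 (toluene), absent from batches 3-5.
    df = df[df["gas"] != 6].copy()

    # Step 2: z-score each of the 128 feature dimensions (zero mean, unit variance).
    feats = df[feature_cols].to_numpy(dtype=float)
    df[feature_cols] = (feats - feats.mean(axis=0)) / feats.std(axis=0)
    return df

# Hypothetical usage: columns "f0" ... "f127" hold the 128 sensor features.
# clean = preprocess(raw_df, feature_cols=[f"f{i}" for i in range(128)])
```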
|
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as from sensor drift arising from modification of the sensing device. The hard problem of olfaction in nature calls for the learning of new odor associations [6]. In an attempt to capture much of this complexity, Vergara et al. [7] developed a publicly available benchmark dataset demonstrating sensor drift over a period of 36 months. This dataset offers a controlled testbed for sensor drift mitigation algorithms and thus defines the scope of this paper.
|
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a broad range of odors. The arrangement is called an artificial nose since it resembles the multiplicity of sensory neuron types in the nasal epithelium. However, while metal oxide-based sensors are economical and flexible, they are unstable over time. Changes to the response properties of sensors make it difficult to detect and identify odors in the long term, and sensors have to be recalibrated to compensate [3]. Recalibration requires collecting and labeling new samples, which is costly because a skilled operator is needed, and challenging because the experimental conditions need to be controlled precisely [3]. Recalibrating a model with unlabeled examples, called semisupervised learning, is a possible alternative but difficult to establish in practice.
|
Experiments in this paper used the gas sensor drift array dataset [7]. The data consist of 10 sequential collection periods, called batches. Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector: 8 features from each of 16 metal oxide-based gas sensors. These features, summarizing the time-series sensor responses, are the raw and normalized steady-state features and the exponential moving averages of the increasing and decaying transients taken at three different alpha values. The experiments used six gases (ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene) presented in arbitrary order and at variable concentrations. Chemical interferents were also presented to the sensors between batches, and the time between presentations varied, both of which contributed to further sensor variability. The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting.
|
D
|
In Fig. 7, I consider the same pulses that have large measured fidelity in Fig. 6, but with different values of the noise strength. Note that a small value of the noise strength, $\Delta$, is applicable to low-temperature measurements, while a large value of $\Delta$ is applicable to high-temperature measurements, probably at room temperature.
|
Finally, in Fig. 7 I have shown that when the $\pi$ pulse acts in the x direction, the CORPSE pulse acts in the y direction, and the SCORPSE pulse acts in the z direction, a large fidelity recovery can be achieved in the presence of both low- and high-temperature measurement noise conditions, and this may be considered in future for implementation in electronic circuit design to minimize error.
|
The paper is organized as follows. In Section II, we provide a theoretical description of the model Hamiltonian for a qubit operating under several control pulses in a random telegraph noise environment. In Section IV, we analyze two main results: (i) qubits driven by a pulse in the x direction while the RTN acts in the z direction, and (ii) individual pulses acting in the x, y, and z directions on the qubit while the RTN still acts in the z direction. Finally, I conclude with our results.
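For illustration, a random telegraph noise trajectory of the kind assumed here can be generated as follows (a minimal sketch; the switching rate and noise strength below are hypothetical values, not parameters from the paper):

```python
import numpy as np

def rtn_trajectory(n_steps, dt, gamma, delta, seed=None):
    """Random telegraph noise: a two-state signal ±delta switching at rate gamma."""
    rng = np.random.default_rng(seed)
    state = rng.choice([-1.0, 1.0])
    xi = np.empty(n_steps)
    for k in range(n_steps):
        # probability of a switch in one time step (Poisson switching, gamma*dt << 1)
        if rng.random() < gamma * dt:
            state = -state
        xi[k] = delta * state
    return xi

noise = rtn_trajectory(n_steps=1000, dt=0.01, gamma=0.5, delta=0.2, seed=1)
print(noise[:5])
```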
|
Here I find that when the $\pi$ pulse acts in the x direction, the CORPSE pulse acts in the y direction, and the SCORPSE pulse acts in the z direction, there is a large fidelity recovery in the presence of both low- and high-temperature measurement noise conditions, and this may be considered in future for implementation in electronic circuit design to minimize error.
|
For a more general case, I consider the pulses acting arbitrarily in the x, y, and z directions and show that when the $\pi$ pulse acts in the x direction, the CORPSE pulse acts in the y direction, and the SCORPSE pulse acts in the z direction, there is a large fidelity recovery in the presence of both low- and high-temperature measurement noise conditions, and this may be considered in future for implementation in electronic circuit design to minimize error.
|
C
|
Now, let us consider the introduction of a covariant $\kappa$-deformation in the Horndeski theory.
|
In our case, the inclusion of the $\kappa$-deformation produces a solution that does not necessarily impose a specific value of the critical exponent, and the $\kappa$-Horndeski-Einstein field equations (8) and (12) are satisfied by the equations (15)-(17) for any value of $z$. This is in contrast with
|
The non-relativistic $\kappa$-deformation presented by daCosta:2020mbf ; Kaniadakis is derived through a kinetic interaction principle. In that context, the $\kappa$-derivative is defined in flat space by Kaniadakis
|
Here, we consider a generalization of the above flat-space derivative to a curved spacetime $\kappa$-deformation of the relativistic covariant derivative Santos:2022fbq
|
Here, we reanalyse black brane thermodynamics in asymptotically AdS Lifshitz spacetimes within Horndeski gravity modified by the recently proposed $\kappa$-deformation Santos:2022fbq of the corresponding kinetic terms. We show that this proposal leads to a generalized entropy for black branes compatible with the area law. The introduction of the $\kappa$-deformation relaxes the constraints of Horndeski theory, allowing for arbitrary values of the critical exponents.
|
B
|
In the first place, one can find that the coherence decreases to zero (to a minimum, under the latter channel) and that certain peaks emerge after the first Lee-Yang singularity, whose values decrease dramatically as the bath size increases toward the thermodynamic limit.
|
Times corresponding to the Lee-Yang zeros are the centers of all the vanishing domains of the rescaled concurrence at low temperature.
|
Furthermore, one can find that the times corresponding to the Lee-Yang zeros are also the zeros of the coherence.
|
Furthermore, one can find that the times corresponding to the Lee-Yang zeros are also the zeros of the coherence.
|
Times corresponding to the Lee-Yang zeros are the centers of all the vanishing domains of the rescaled concurrence at low temperature.
|
B
|
$\mathfrak{P}_{0}\,\mathbf{H}\mathcal{A}^{\tilde{\mathcal{S}},\psi}_{\tilde{Q},\tilde{W}}=0$. In general, perverse degrees with respect to the new filtration are lower than for the old one. It is for these two reasons that we call the new filtration the less perverse filtration.
|
We call the filtration introduced in Theorem A the less perverse filtration, in order to distinguish it from a different perverse filtration that was introduced in [DM20] on the way towards the definition of BPS sheaves, which is recalled in (25). This is a perverse filtration on the critical CoHA $\mathbf{H}\mathcal{A}^{\widetilde{\mathcal{S}}}_{\widetilde{Q},\widetilde{W}}$
|
We can generalise the results of this paper, incorporating deformed potentials as introduced in joint work with Tudor Pădurariu [DP22]. We indicate how this goes in this section. We will not use this generalisation of the less perverse filtration, except in the statement of Proposition 6.9 and the example of §7.2.1.
|
$\mathbf{H}\mathcal{A}^{\mathcal{S},G,\zeta}_{\Pi_{Q},\theta}$ carries a Hall algebra structure introduced by Schiffmann and Vasserot in the case of the Jordan quiver [SV13]. It is defined in terms of correspondences. Since the algebra defined this way is isomorphic [RS17, YZ18] to the critical CoHA introduced in §2.3, we refrain from giving this definition, instead referring the reader to [SV13, Sec. 4] and [YZ18] for details. We define this algebra structure below, incorporating a sign twist as in §2.3.1, so that we can prove the PBW theorem for this algebra.
|
In this subsection we give a curious example, which will not be used later in the paper. It is an example of how deforming the potential can modify the BPS Lie algebra.
|
B
|
The distributions for $J^{P}=1^{-}$ are shown by the green lines.
|
The inset plots show the distribution of the parameters $\beta$ and $\zeta$ for an ensemble of pseudoexperiments. The coloured lines indicate the true values corresponding to each hypothesis.
|
Far from the resonances the expected values of $\beta$ and $\zeta$ are determined for the continuum.
|
Bottom row: The angular distributions for $\beta$ (left) and $\zeta$ (right) for each scenario.
|
Moreover, interference effects modify the values of the angular asymmetries, $\beta$ and $\zeta$.
|
A
|
We denote the total statistical operator of the problem as $\hat{\rho}^{\text{(tot)}}$
|
the initial matter state $\hat{\rho}_{0}$, and $|\,\cdot\,\rangle$
|
We first highlight that the total energy of the system, formed by the matter and the gravitational field, is conserved in the derived QFT model.
|
(the matter-wave system and the gravitational field), and by $\hat{\rho}$
|
system, but only on graviton and matter-wave frequencies, $\omega_{\bm{k}}$
|
C
|
$\{\widetilde{\operatorname{TL}_{n}}H_{k}(X_{n},\Bbbk)/[V_{n,1}],\ldots,[V_{n,p-1}]\}_{n\geq N}$ is a finitely generated $\operatorname{LS}$-module.
|
The goal of this Subsection is to give a couple of examples of topological stability. These examples will likely be useful not only to a reader who is interested in topological stability, but also to a reader who wants to understand topological actions as in Section 3.
|
The definitions we provide assume that $\delta=1$, and this is perhaps not a defect, but rather a feature of representation stability, at least from the viewpoint of actions on finite sets. With regard to topological actions, we are only interested in the $\delta=1$ case and so we face no problems in this regard. One advantage of the notion of representation stability we provide is that it is naturally analogous to the definition of $\operatorname{FI}$-modules, and hence our theorem that topological stability implies representation stability (Theorem 6.3) can be viewed as an analogue to the statement that the homology of configuration spaces is a finitely generated $\operatorname{FI}$-module, as in [16] (Church, Ellenberg, Farb).
|
The goal of this Subsection is to make topological observations which are required to prove Theorem 5.16. We begin with a couple of simple but important observations.
|
An action of $\operatorname{TL}_{n}$ on a topological space $X$ induces an action on each homology group $H_{k}(X)$. For the reader’s convenience, we will now state a homological version of Lemma 3.6 which will be useful later in the paper. The reader may feel free to skip over this for now and return to this at a later point.
|
A
|
The renormalisation scale, $\mu_{r}$, is set to be the same value as $\mu_{f}$.
|
can reflect the relative magnitude of the cross sections for the hadroproduction of different states,
|
equivalently, we set the values of the masses appearing in the cross sections in terms of the following approximation,
|
With the above parameter choice, we can compute the integrated cross sections for the states listed in Table 1 in the kinematic region,
|
In order to explore the relative magnitudes of the cross sections for the hadroproduction of different states,
|
C
|
$B-3L_{\tau}$
|
$6.6\times 10^{-27}$
|
$7.0\times 10^{-27}$
|
$7.2\times 10^{-27}$
|
$7.3\times 10^{-27}$
|
D
|
Here we present and prove a result that will be needed to demonstrate the excursion mimicry aspect of Theorem 7.2.
|
We now begin to implement the plan. The plan first claims, “low overlap entails a high cumulative duration for excursions”. However, though intuitive, this is not quite correct deterministically. In fact, a pair $\phi$ and $\psi$ of $n$-zigzags from $(0,0)$ and $(0,1)$ exists with zero overlap, and with cumulative excursion duration equal to the tiny quantity $n^{-1}$. To see this consider two staircases between $(0,0)$ and $(n,n)$, the first of which visits $(n,0)$, and the second of which visits $(0,1)$ and $(n,1)$, and let $\phi$ and $\psi$ be the corresponding $n$-zigzags obtained by applying $R_{n}$ from Subsection 3.1.1.
|
Let $\phi$ be an $n$-zigzag from $(0,0)$ to $(0,1)$. Let the parameters $\kappa\in(0,e^{-1})$ and $R>0$ be given.
|
where the supremum is taken over all $n$-zigzags $\psi$ from $(0,0)$ to $(0,1)$.
|
Let $\phi$ and $\psi$ be $n$-zigzags between $(0,0)$ and $(0,1)$.
|
D
|
$\zeta$ (or equivalently $\mu$ in the relation $1+\zeta=1/\mu$);
|
to one parameter $t_{0}=14.7$ Gy with $\chi_{min}=1.1197$. The long-dashed
|
VSL Formula (71) with $t_{0}=14.7$ Gy. Long-dashed
|
in Figure 5 for $t_{0}\gtrsim 14.8$,
|
Also note that this “optimal” value of $t_{0}=14.7$ Gy is not
|
B
|
In quantum metrology, the strong and collective atom-light interactions in cavity-QED systems offer a prominent advantage in quantum-enhanced measurements.
|
and the sensitivity can be sped up to attain the HL by a prefactor $N^{2}$.
|
In summary, we study the time-reversal protocol to sense small displacements of the light field and show that the sensitivity of our scheme can surpass the SQL and even attain the HL.
|
Furthermore, we obtain the sensitivities to small displacements of the light field by choosing the optical part to be a superposition of even and odd coherent states, and changing the atomic part from the collective ground state to the superposed spin-coherent state, in section 3 and section 4.
|
In this work, we study the time-reversal protocol to sense small displacements of the light field, and show that the sensitivity of the scheme can be sped up to attain the HL.
|
D
|
Mainly owing to its conceptual simplicity, gravitational lensing has developed into one of the most informative and reliable methods of observational cosmology (Bartelmann &
|
We have investigated how the power spectrum $C_{\ell}^{\gamma}$ of weak cosmological gravitational lensing changes with the expansion function $E(a)$ of the cosmic background. We are interested in this change for two main reasons: First, in view of a possible time dependence of dark energy, it may be important to know at which redshifts $C_{\ell}^{\gamma}$ is most sensitive to changes in the expansion function or, in other words, at which redshifts changes in the expansion function need to occur for lensing to be most efficient in detecting them. Second, owing to a multitude of precise cosmological measurements, it has become possible to reconstruct the cosmic expansion function purely empirically, i.e. without reference to a specific cosmological model, with astonishing accuracy. Using this empirically determined expansion function for calculating the weak-lensing power spectrum, the remaining uncertainty of the expansion function will propagate into the power spectrum. In this context, it is interesting to see how accurately the weak-lensing power spectrum can be predicted based on a purely empirically determined expansion function.
|
The first term on the right-hand side reflects the variation of the density-fluctuation power spectrum $P_{\delta}(k,a)$ in response to a change in the wave number $k$ where it is to be evaluated, which is in turn due to a change in the comoving radial distance $w(a)$ at fixed angular wave number $\ell$. The function $\kappa(k,a)$ defined in (17) also takes the shape evolution of the non-linear power spectrum into account; see Fig. 1. The radial comoving distance $w(a)$ itself responds to variations in the expansion function $E(x)$ with $x>a$, as given by (9). The second term on the right-hand side reflects the variation of $P_{\delta}(k,a)$ with uncertainties in the time evolution of cosmic structures in response to changes in the expansion function $E(x)$.
|
If the cosmic expansion function $E(a)=H(a)/H_{0}$ is varied in an arbitrary way, how does the power spectrum of cosmological weak lensing change? Here, $H(a)$ is the Hubble function and $H_{0}$ its present value. We believe that this question is relevant for two main reasons: First, it is interesting to find out how sensitive the weak-lensing power spectrum $C_{\ell}^{\gamma}$ is to changes of the expansion function as a function of redshift. Or, phrased differently, at what redshift is $C_{\ell}^{\gamma}$ most or least sensitive to uncertainties in or modifications of $E(a)$? This question is particularly important in view of a possible time dependence of the dark energy (Amendola
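As a rough illustration of the geometric dependence described in this fragment, the sketch below numerically perturbs an expansion function and propagates the change into the comoving distance. It assumes a spatially flat background, a toy LCDM form for $E(a)$, and the standard relation $w(a)=(c/H_{0})\int_{a}^{1}dx/(x^{2}E(x))$; none of these choices, nor the function names, come from the paper itself.

```python
import numpy as np

C_OVER_H0 = 2997.9  # Hubble distance c/H0 in Mpc/h (illustrative normalisation)

def E_lcdm(a, om=0.3):
    """Toy dimensionless expansion function E(a) for a flat LCDM background."""
    return np.sqrt(om / a**3 + (1.0 - om))

def comoving_distance(a, E, n=4096):
    """w(a) = (c/H0) * int_a^1 dx / (x^2 E(x)), assuming spatial flatness."""
    x = np.linspace(a, 1.0, n)
    f = 1.0 / (x**2 * E(x))
    return C_OVER_H0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def E_perturbed(a, eps=0.01, a_lo=0.4, a_hi=0.6):
    """E(a) increased by a fractional amount eps inside the window [a_lo, a_hi]."""
    bump = 1.0 + eps * ((a >= a_lo) & (a <= a_hi))
    return E_lcdm(a) * bump

a_src = 0.5  # scale factor of a fiducial source plane
w0 = comoving_distance(a_src, E_lcdm)
w1 = comoving_distance(a_src, E_perturbed)
print(f"w(a=0.5)             : {w0:.1f} Mpc/h")
print(f"relative change in w : {(w1 - w0) / w0:.2e}")
```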
|
Schneider, 2001; Schneider, 2006; Bartelmann, 2010; Kilbinger, 2015; Mandelbaum, 2018). Expected weak-lensing power spectra depend on the cosmological background model in two ways: geometrically via the angular-diameter distances entering its geometrical weight function, and dynamically via the growth of density perturbations. In view of these dependences, we address in this paper the following question:
|
D
|
$\frac{\mathcal{A}(U)(\rho)}{\operatorname{tr}\left[\mathcal{A}(U)(\rho)\right]}=t(U)\,\rho\,t(U)^{\dagger}$
|
The if-clause ($m=1$) impossibility is immediate. In addition to the following full proof of Theorem 1, the appendix contains two more proofs for only the exact, $\epsilon=0$, impossibility. The “operational” proof in Appendix B reaches a contradiction by using the supposed $c^{m}_{\phi}$ algorithm as a building block in a larger circuit. The proof in Appendix C for only the $m=1$ case (we thank an anonymous referee from the QIP conference for observing that such a proof is possible) gives additional intuition: The special unitary group $SU(d)$ is a $d$th cover of $PU(d)$, the projective unitary group. This prevents the existence of a continuous map from a unitary superoperator to a matching operator, $PU(d)\to U(d)$, preventing an exact if-clause algorithm. The full proof below holds for approximations and relies on 1 proven next.
|
The rest of the paper is organized as follows. Section II defines oracle computation using functions on $d$-dimensional unitaries, $U\in U(d)$. Section III proves the if-clause impossibility and the process tomography limitation by exploiting the continuity of algorithms and the topology of the space $U(d)$ (1). Section IV uses this topological approach to prove results regarding the neutralization, $1/d$th power, transpose, and inverse. In Section V we emphasize that our method applies to the worst-case models with the exception of linear optics. We discuss the cause and the significance of this exception. Then we discuss relaxed causality and measurements.
|
The above results limit versatile quantum computation, and impact our understanding of tomography, measurements, linear optics and causality. Using process tomography for the if clause has a caveat: Instead of a superoperator estimate of $\rho\mapsto U\rho\,U^{\dagger}$, the if clause requires a matrix estimate of $U$. We show the limitation of such matrix tomography. Defining a relaxed if clause circumvents the limitation, but the algorithm must use a measurement beyond the binary success/fail type. This splits measurements into two groups with different effects on the quantum-circuit query complexity. The quantum-circuit model itself is compared to other models: linear optics and process matrices. On the if clause the quantum-circuit model turns out to be infinitely less efficient than linear optics! One might put some hope into relaxing causality; maybe linear optics is better matched by process matrices. While arguably true for the quantum switch task, this is wrong for the more fundamental if-clause task: its process-matrix complexity is infinite, too. The advantage of linear optics stems from restricting the oracle. The models with fully general unitary oracles have a property central to our impossibility proofs: homogeneity. Linear optics restrict oracles to the form $1\oplus U$ which breaks homogeneity. Differently from [36], we attribute the direct sum to the linearity of linear optics (see Section V).
|
One direction of the equivalence is immediate from (5), the other follows from Theorem 2.3 of [67]. The theorem also relates the errors in the operator and superoperator languages. Here we continue with superoperators.
|
D
|
The large database created by high-throughput DFT calculations forms the basis for a surrogate machine learning model that enables the prediction of the work function at a fraction of the computational cost. As a first step, we assess common models from the materials science machine learning community as a benchmark. For that, we employ the automatminer testing suite,[87] and a conventional Coulomb matrix (trained with a random forest model).[88] For automatminer we use the “express" setting and compare using the bulk unit cell and the topmost 5 atomic layers of the surface slabs as inputs. As a baseline model we predict the work function to be the average work function regardless of the surface. The automatminer model performs only marginally better than the baseline model when bulk structures are used as an input. When the surface slabs are used as inputs the performance increases and is comparable to the performance of the Coulomb matrix. The mean absolute errors (MAEs) are shown for the training and test sets in Figure 4a (cf. Figure S9 for RMSEs). The baseline MAE is 0.60 eV and the DFT accuracy is indicated in the green-shaded area between 0.022 and 0.1 eV, corresponding to the convergence error (see Methods) and the error between PBE-calculated and experimental work functions,[81] respectively.
|
The large database created by high-throughput DFT calculations forms the basis for a surrogate machine learning model that enables the prediction of the work function at a fraction of the computational cost. As a first step, we assess common models from the materials science machine learning community as a benchmark. For that, we employ the automatminer testing suite,[87] and a conventional Coulomb matrix (trained with a random forest model).[88] For automatminer we use the “express" setting and compare using the bulk unit cell and the topmost 5 atomic layers of the surface slabs as inputs. As a baseline model we predict the work function to be the average work function regardless of the surface. The automatminer model performs only marginally better than the baseline model when bulk structures are used as an input. When the surface slabs are used as inputs the performance increases and is comparable to the performance of the Coulomb matrix. The mean absolute errors (MAEs) are shown for the training and test sets in Figure 4a (cf. Figure S9 for RMSEs). The baseline MAE is 0.60 eV and the DFT accuracy is indicated in the green-shaded area between 0.022 and 0.1 eV, corresponding to the convergence error (see Methods) and the error between PBE-calculated and experimental work functions,[81] respectively.
|
It is not surprising that the model performance is poor when the bulk structure is used as an input, as the database contains multiple surfaces of different work functions for any given bulk structure. While the performance of the benchmarking models improves when the surface slab is used as the input instead, the MAEs are still large and significant overfitting is observed. This is likely due to the fact that the models cannot distinguish between the top and bottom of the input slab (which in general are not symmetric) and the database consists of all unique terminations. In general, if one termination (located at the top surface) is labeled with the calculated work function, the same termination exists in another input structure at the bottom surface (whereas the calculated work function always refers to the top surface). Hence, the shortcomings of the automated benchmarking models do not come from the machine learning models used but rather from the implementation of the featurization of surface slabs.
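To make the baseline comparison above concrete, here is a minimal sketch of how a "predict the training-set average work function" baseline is scored against a generic regressor; the synthetic arrays and the random-forest stand-in are placeholders and do not reproduce the paper's featurization or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 1000 "surfaces" with 20 made-up features and work functions in eV.
X = rng.normal(size=(1000, 20))
y = 4.5 + X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Baseline: always predict the mean work function of the training set.
baseline_pred = np.full_like(y_te, y_tr.mean())
print(f"baseline MAE: {mean_absolute_error(y_te, baseline_pred):.3f} eV")

# Simple surrogate model (stand-in for the Coulomb-matrix + random-forest benchmark).
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"model MAE   : {mean_absolute_error(y_te, model.predict(X_te)):.3f} eV")
```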
|
Some statistical analyses have been carried out in literature showing that the electronegativity is linearly correlated with the work function both for elemental crystals and binary compounds.[59, 65] Additionally, for elemental crystals an inverse correlation with the atomic radius is pointed out. The work function of elemental crystals ranges between 2 and 6 eV (for Cesium and Selenium, respectively). The statistical analyses of about 30 binary compounds show that a correlation with the electronegativity of the atom with the lower electronegativity is the strongest (better than the arithmetic or geometric mean of the individual electronegativities). Density functional theory has been a well-established approach (using a slab configuration) to calculate the work function, similar to the more simplistic Jellium model.[66] Also a phenomenological model has been developed that is able to estimate the work function fairly accurately for metals and alkaline-metal coated surfaces.[67] This phenomenological equation is a function of the atomic radius and the number of atomic sites per unit cell area. However, it relies on a single parameter (loosely related to the number of electrons that an atom can donate to the surface) that is not clearly defined for more complex surfaces and takes on nonphysical values in the case of alkaline coatings. In recent work, Hashimoto et al.[68] attempted to screen for low and high work function materials using a Bayesian optimization approach. However, they assume the work function to be approximated solely as a bulk property, neglecting any surface contributions during screening. For the highest and lowest “bulk work function” material candidates the actual surface contributions have then been included, which rendered most of their top candidate materials to exhibit average work functions between 3 and 6 eV. Unsurprisingly, among their top candidate materials, they have found that the (110) surface of elemental Cesium has a low work function of 2.0 eV and that the (111) surface of KEuO$_{2}$ has a relatively high work function of 8.4 eV. The approximated bulk work function of some of the screened work function candidates differs by as much as 7 eV from the actual work function when including the surface contributions. This clearly shows that, while for simple structures (such as elemental metals) the work function can theoretically be predicted from bulk properties alone,[69] it is important to consider surface contributions to quantitatively predict the work function of a material. The surface termination, atom adsorption (most commonly oxygen and hydrogen), contamination, and reconstructions can affect the surface dipole and hence the effective work function. While a crystal graph convolutional neural network has been used successfully to predict the cleavage energies of intermetallic slabs,[70] there have been no reports on featurizing slabs to predict the work function (except for the MXene 2D-material class[71]).
|
The observation that the distribution in work functions is near-Gaussian could indicate that the chemical space we chose was diverse enough to evenly sample work functions across possible values. The extended tail at the high work function end appears to be an artifact coming from ionically unrelaxed surfaces where a small, electronegative atom (e.g., oxygen or hydrogen) is cleaved at a large, unphysical distance (as discussed in the next section and corroborated by Figure S10). This might also be the case for the low work function tail but appears to be less pronounced. This artifact can be mitigated by ionically relaxing the surface slabs (see next section) and we expect this to result in an overall slightly narrower distribution. Interestingly, the work function distributions of binary and ternary compounds (and to a certain extent also the elemental crystals) have similar averages, standard deviations, and ranges. This may be explained by the observation that the work function is primarily determined by the chemical species present in the topmost layer at the surface (as discussed in the next paragraph), and will largely not depend on the total number of chemical species present in the entire unit cell. Moreover, the average work function of the database is lower than the average work function for the JARVIS database and C2DB (4.91 and 5.43 eV, respectively, cf. Table S1) while the standard deviations are somewhat similar (1.22 and 1.08 eV, respectively). The average cleavage energy of all asymmetric slabs (103.4 meV/Å$^{2}$) is higher than the average for all symmetric slabs (88.0 meV/Å$^{2}$). This is expected because this database is calculated for unrelaxed slabs and cleaving asymmetric slabs may lead to dangling atoms in nonphysical positions too far/close to the other surface atoms.
|
B
|
$V$ that intersects $\left(\mathcal{S}_{0},\iota_{0}\right)$
|
Let $\left(\mathcal{Q},g,\mathcal{O}\right)$ be a spacetime
|
Let $\left(\mathcal{Q},g,\mathcal{O}\right)$ be a spacetime,
|
Let $\left(\mathcal{Q},g,\mathcal{O}\right)$ be a spacetime.
|
Let $\left(\mathcal{Q},g,\mathcal{O}\right)$ be a spacetime.
|
A
|
(for any $\lambda\in\mathbb{R}\setminus\{0\}$ and $K>0$).
|
$L^{2}$-cutoff) for the Benjamin-Ono equation (1.19) with $k=3$.
|
and thus are incompatible with the Wick-ordered $L^{2}$-cutoff.
|
focusing Gibbs measure with an $L^{2}$-cutoff:
|
$R_{N}$ is as in (1.8) with $\lambda\in\mathbb{R}\setminus\{0\}$ and $k=3$.
|
A
|
The observations of 2D superconductivity in the cuprates inspired the development of theories for pair-density wave (PDW) order Himeda et al. (2002); Berg et al. (2007); Agterberg et al. (2020) and more broadly the concept of intertwined orders in high-$T_{c}$ superconductors Fradkin et al. (2015).
|
In conclusion, by combining neutron diffraction, muon spin rotation, and magnetization measurements in strong magnetic fields, we have found that La2-xSrxCuO4+y with $x=0.06$ shows no magnetic order at low temperature, and that the application of a magnetic field induces stripe ordered regions. The volume of these regions is proportional to the applied field value, while the ordered magnetic moment is field-independent. These findings are in contrast to the interpretation of earlier, similar data on field-induced magnetism in oxygen-stoichiometric La2-xSrxCuO4 samples, where a field-induced enhancement of neutron diffraction intensity was interpreted to be caused by an increase in the ordered magnetic moment. Our results make it relevant to re-investigate with $\mu$SR whether the field-enhanced stripe signal in the other La-214 cuprates is due to an increase of the ordered moment, as previously concluded, or rather caused by an increased magnetic volume fraction. The answer to this is highly relevant for the understanding of the interplay between magnetism and SC in the cuprate superconductors.
|
The superconducting coherence length in this sample has been estimated via the WHH model to be in the range $\xi=2.5-4.5$ nm, which is in agreement with other La2-xSrxCuO4 compounds, e.g. Ref. Wang and Wen, 2008. We find the lower limit of the magnetic correlation length to be significantly larger, $\xi_{\rm AFM}>14$ nm. These findings suggest that the field-induced stripe order is correlated beyond the size of a vortex core, in agreement with e.g. Ref. Lake et al., 2002. Therefore, the simple picture that magnetism is found only inside the vortex core is inadequate, and further investigations of the overlap between the magnetic and SC phases are needed.
|
It is evident that there is a rich interplay between the 2D SC, the stripe order, both structural and magnetic, and 3D SC in these cuprate compounds, and that there is a need to investigate the different phases in the cuprates in order to understand the competition, interplay, and phase separation of the different states of matter.
|
In panel (d) the rotation frequency of the muons in the non-magnetic regions is seen to be constant at high temperature, with a value that corresponds to the external magnetic field. The small negative shift of $\omega_{\text{SC}}$ below 38 K together with the increase of $\sigma_{\text{SC}}$ appearing below $T_{\rm c}$ is typical for SC and can be understood in terms of a broadening of the field distribution due to the formation of a flux line lattice within the SC state that forms below $T_{\rm c}$ Blundell (1999). We thus ascribe this non-magnetic component at low temperatures to muons stopping in SC regions of the sample, as done in Refs. Mohottala et al., 2006; Udby et al., 2013. There is no evidence of a third component that is simultaneously non-SC and non-magnetic.
|
C
|
$[(\lambda_{lmkn}^{2}+2)^{2}+4am\omega_{mkn}-4a^{2}\omega_{mkn}^{2}](\lambda_{lmkn}^{2}+36am\omega_{mkn}-36a^{2}\omega_{mkn}^{2})$
|
$(2\lambda_{lmkn}+3)(96a^{2}\omega_{mkn}^{2}-48am\omega_{mkn})+144\omega_{mkn}^{2}(M^{2}-a^{2}),$
|
$\sum_{lmkn}\frac{|Z^{\infty}_{lmkn}|^{2}}{4\pi\omega_{mkn}^{2}},\qquad\left(\frac{dE}{dt}\right)^{\mathrm{H}}=\sum_{lmkn}\alpha_{lmkn}\frac{|Z^{\mathrm{H}}_{lmkn}|^{2}}{4\pi\omega_{mkn}^{2}},$
|
$\frac{256(2Mr_{+})^{5}(\omega_{mkn}-m\Omega_{\rm H})[(\omega_{mkn}-m\Omega_{\rm H})^{2}+4\epsilon^{2}][(\omega_{mkn}-m\Omega_{\rm H})^{2}+16\epsilon^{2}]\,\omega_{mkn}^{3}}{|C_{lmkn}|^{2}}.$
|
$[(\lambda_{lmkn}^{2}+2)^{2}+4am\omega_{mkn}-4a^{2}\omega_{mkn}^{2}](\lambda_{lmkn}^{2}+36am\omega_{mkn}-36a^{2}\omega_{mkn}^{2})$
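If one wants to evaluate the horizon factor $\alpha_{lmkn}$ written out above numerically, it can be coded directly as in the sketch below. The Kerr horizon quantities $r_{+}$, $\Omega_{\rm H}$ and $\epsilon$ are filled in with their standard expressions (an assumption, since the excerpt does not define them), and the Teukolsky-Starobinsky constant $|C_{lmkn}|^{2}$ is left as a caller-supplied input.

```python
import numpy as np

def kerr_horizon(M, a):
    """Standard Kerr horizon quantities (assumed here; not defined in the excerpt)."""
    r_plus = M + np.sqrt(M**2 - a**2)
    Omega_H = a / (2.0 * M * r_plus)
    eps = np.sqrt(M**2 - a**2) / (4.0 * M * r_plus)
    return r_plus, Omega_H, eps

def alpha_lmkn(M, a, m, omega, C_abs2):
    """Evaluate the displayed horizon factor; C_abs2 = |C_lmkn|^2 is supplied by the caller."""
    r_plus, Omega_H, eps = kerr_horizon(M, a)
    p = omega - m * Omega_H
    num = (256.0 * (2.0 * M * r_plus)**5 * p
           * (p**2 + 4.0 * eps**2) * (p**2 + 16.0 * eps**2) * omega**3)
    return num / C_abs2

# Illustrative call with made-up numbers (geometric units, M = 1).
print(alpha_lmkn(M=1.0, a=0.9, m=2, omega=0.5, C_abs2=1.0e4))
```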
|
A
|
Fig. 3(a, b) present the modulation depth as functions of the seed laser power and electron beam current. For nominal HGHG, the modulation depth is related to the seed laser intensity, but not to the electron beam current. For DEHG, however, the modulation depth is correlated with both the seed laser power and the electron beam current. This phenomenon makes the radiation of DEHG more tolerant to variations of the current. The reason is that although an increase of beam current will enhance the radiation power, the coexisting increase in modulation depth and the fixed dispersion strength result in over-compression and a smaller bunching factor, which is not conducive to the radiation power growth. This feedback mechanism makes the radiation of DEHG naturally insensitive to the beam current. Fig. 3(c, d) display the radiation power versus seed laser power and electron beam current in nominal HGHG and DEHG, respectively. One can see that in nominal HGHG, the radiated power increases as the electron beam current rises. In DEHG, however, the radiation power does not increase with current in a certain area due to the feedback mechanism. The power stable area can be regarded as a region in which the radiation power decreases by no more than 5% due to jitters in the electron beam current and seed laser power. In this case, the power stable area of DEHG is 28% larger than that of nominal HGHG. Further simulations reveal that this quantity becomes larger when the frequency up-conversion amplitude is lower. When the radiation frequency is at the 9th harmonic of the seed laser, the power stable area of the DEHG is twice as large as that of the nominal HGHG. These results indicate the great stability of DEHG.
|
Radiation at 6 nm was simulated to illustrate the capability of the proposed technique to generate soft X-ray pulses. In this case, the peak power of the seed laser is 0.2 MW, indicating an average power of 70 mW under 350 fs pulse duration (FWHM) and 1 MHz repetition rate. Parameters of the electron beam and undulators are all the same as those mentioned above. After the first modulator, the laser power would be amplified to 10 MW and the electron beam would acquire 5.2 times the energy spread at the same time. The amplified seed laser is then led to the laser transport line while the electron beam traverses the chicane. The power profiles of the seed laser at different positions (i.e., at the entrance of the 1st modulator, at the exit of the 1st modulator and at the entrance of the 2nd modulator) are illustrated in Fig. 4(a),
|
The long upstream modulator (two undulator segments) is used for seeding amplification and electron modulation, and the energy-modulated electron beam together with the amplified seed laser are then guided to the downstream elements for further beam manipulation and high harmonic generation through HGHG or EEHG processes. In the modulator, an external coherent seed laser with a peak power larger than the shot noise is directly injected to interact with the electron beam. The long-distance modulation process can ensure sufficient energy exchange between the electron beam and the seed laser, as portrayed in Fig. 1(a)(I-III). In the initial stage of modulation, the power of the seed laser hardly grows during the laser-electron interaction process while a weak energy modulation is imprinted onto the electron beam. In the latter part, the lethargy regime of seeded FEL is overcome and there is an intense energy exchange between the laser and the electron beam. At this time, the electric field increases rapidly and produces a significant sinusoidal modulation on the electron beam. Although the power of the laser enters the exponential gain regime, it is far from saturation and the rotation of the phase space is almost imperceptible, which ensures that the energy modulation of the electron beam remains sinusoidal. This laser-electron interaction mechanism realizes the direct amplification of the seed laser and effective sinusoidal energy modulation of the electron beam.
|
where the beam current profile is also presented as a reference coordinate. One can see clearly the enhancement of the seed laser intensity through the 1st modulator and the inheritance of the power profile to the electron beam current. The laser transport line introduces a 30 fs time delay of the laser to the electron beam without obviously decreasing the laser power. The introduced time delay can effectively counteract the slippage time in the second modulator to obtain a uniform energy modulation of the electron beam. In the second modulator, the amplified seed laser induces 2.9 times of the energy spread on the electron beam. By now, the two-stage modulation of the electron beam is accomplished by a unique external coherent laser source. For the nominal EEHG with one-Rayleigh-length modulators, however, the total required peak power of two seed lasers to obtain the same energy modulation is 170 MW, which is about three orders of magnitude higher than the value used here.
|
The above simulation results demonstrate that the proposed technique is capable of generating stable, nearly fully coherent and MHz-level repetition-rate EUV radiation. Nineteenth harmonic generation is almost the limit of HGHG with the above parameters. Shorter wavelengths are no longer within the scope of a single-stage HGHG, and they can be achieved through EEHG with a similar setup of the first modulator based on the direct-amplification technique, as shown in Fig. 1(b). Like the nominal EEHG, two modulators, two chicanes and one radiator are arranged in the layout. The structural differences lie in the increased length of the first modulator, the added laser transport line between the two modulators, and the absence of the second external seed laser. The first modulator, which consists of two undulator segments, enables laser amplification and electron modulation. The amplified seed laser is then forwarded to the laser transport line to get an appropriate time delay ($\sim$30 fs), in order to interact with the electron beam again in the subsequent modulator. The laser transport line mentioned here is an optical system that uses lenses and mirrors to focus and transmit the optical field with minimal power loss. With commercially available high transmittance lenses and high reflectivity mirrors, power loss can be controlled within 5%. After the twice-modulated electron beam passes through the second chicane, a highly micro-bunched beam distribution is formed. In comparison to the nominal EEHG, whose pulse energy fluctuations largely inherit the time jitter of the second seed laser, DEHG with a naturally synchronized second seed laser is very favorable for generating stable radiation.
|
D
|
$\operatorname{WH}^{q}_{\operatorname{QAC}}(M,\phi,a)\to L^{2}\mathcal{H}^{q}(M\setminus\partial M,g_{\operatorname{QAC}})\to\operatorname{WH}^{q}_{\operatorname{QAC}}(M,\phi,-a)$
|
for all $i$ allows us to take $a=(0,\ldots,0)$ in Theorem 3.59, Corollary 3.10 and Corollary 3.11, giving in particular the identification
|
Finite dimensionality and Poincaré duality are a consequence of Proposition 5.3, Corollary 3.10 and Corollary 3.11.
|
Corollary 4.14 and Proposition 4.4 can then be combined to give a proof of the Vafa-Witten conjecture [43].
|
When $q=\frac{m}{2}=4$, Poincaré duality follows from Corollary 3.11 and the fact that in middle degree
|
B
|
Black hole thermodynamics is one of the most interesting topics in General Relativity. The history of the subject goes back to when Bekenstein proposed that the area of the black hole is proportional to its entropy Bekenstein:1973ur , followed by Hawking’s discovery that black holes radiate Hawking:1974sw . Since then, various thermodynamic properties have been attributed to black holes, such as internal energy, entropy, heat capacity, enthalpy, etc.
|
A little more than twenty years after the studies in Refs. Bekenstein:1973ur ; Hawking:1974sw , the iconic work by Witten, in Ref. Witten:1998zw , used the newly found AdS/CFT correspondence, as proposed in Ref. Maldacena:1997re , to relate the Hawking temperature obtained in a curved higher-dimensional spacetime to the temperature of a superconformal Yang-Mills theory in a flat four-dimensional spacetime. Soon after Witten’s work, the authors of Refs. Chamblin:1999tk ; Chamblin:1999hg studied the thermodynamics associated with charged AdS black holes in the holographic context, thereby opening up a multitude of possibilities to connect string and gauge theories through various types of black holes and their thermodynamics. It is worthwhile to mention that within holography the thermodynamic quantities are derived from the holographic renormalization of the on-shell Euclidean action or from the thermodynamic potentials.
|
Black hole thermodynamics is one of the most interesting topics in General Relativity. The history of the subject goes back to when Bekenstein proposed that the area of the black hole is proportional to its entropy Bekenstein:1973ur , followed by Hawking’s discovery that black holes radiate Hawking:1974sw . Since then, various thermodynamic properties have been attributed to black holes, such as internal energy, entropy, heat capacity, enthalpy, etc.
|
Quantum corrections are relevant in the phenomenology of microscopic black holes. Their effects in a static black hole have been studied by Kazakov and Solodukhin Kazakov:1993ha , where the authors considered small deformations in the Schwarzschild metric due to the quantum fluctuations in the gravitational and matter fields. The effects of quantum corrections on black hole thermodynamics and phase transitions were studied in Ref. Shahjalal:2019pqb ; Lobo:2019put .
|
Within the general relativity context, some authors have proposed to associate a mechanical pressure with black holes. To do this, they considered the cosmological constant to be a thermodynamic variable that can be associated with the black hole pressure, as shown in Refs. Kastor:2009wy ; Kubiznak:2012wp . In this way, applying the first law of thermodynamics, it is possible to deduce what would be the thermodynamic volume of a black hole, which is different from its physical volume. The existence of a mechanical pressure in black holes allows a better analogy with usual mechanical systems, such as thermal machines, where phase transitions occur naturally. For example, Johnson, in Ref. Johnson:2014yja , calculates the efficiency of a black hole, considering it to be a thermal machine in a Carnot cycle. Recently, another interesting example is presented by Ökcü and Aydıner in Ref. Okcu:2016tgt , where they have studied a process similar to the Joule-Thomson process in a black hole and were able to calculate its temperature inversion curves.
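For concreteness, a standard textbook example of this extended-first-law bookkeeping (not taken from the cited references) is four-dimensional Schwarzschild-AdS: with $M=\frac{r_{+}}{2}\left(1+\frac{r_{+}^{2}}{L^{2}}\right)$, $S=\pi r_{+}^{2}$ and the identification $P=-\frac{\Lambda}{8\pi}=\frac{3}{8\pi L^{2}}$, the mass plays the role of the enthalpy and the extended first law reads $dM=T\,dS+V\,dP$, so that the thermodynamic volume is $V=\left(\frac{\partial M}{\partial P}\right)_{S}=\frac{4}{3}\pi r_{+}^{3}$, the naive Euclidean volume associated with the horizon radius.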
|
A
|
We show that there exists a simple relation between the energy dissipated in the local environment due to the work of the demon and the violation of classical local correlation.
|
Our results provide a new approach to exploring and better understanding the relationships between quantum non-locality, information theory, and thermodynamics.
|
Quantum entanglement is a fundamental characteristic of quantum theory and plays an important role as a resource in quantum information tasks nonlocality1 ; nonlocality2 ; nonlocality3 .
|
The Maxwell demon was first proposed by James Clerk Maxwell in 1867 to demonstrate that the second law of thermodynamics is statistical rather than based on dynamical laws such as those of Newton demon . The Maxwell demon paradox was completely resolved by Landauer in 1961 when he introduced the concept of logical irreversibility for the process of memory erasure Landuer . Landauer’s erasure principle states that information erasure is a logically irreversible process in which energy dissipation must be involved, thus causing an entropy increase in the environment. Due to its important role in revealing connections between thermodynamics and information theory, the Maxwell demon has now been widely investigated in quantum thermodynamics thermo and quantum information theory information1 ; information2 .
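As a quick numerical handle on Landauer's erasure principle quoted above, the sketch below evaluates the minimum heat $k_{B}T\ln 2$ released when one bit is erased at a few representative temperatures; the temperatures are illustrative and not taken from the fragment.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_bound(T):
    """Minimum heat dissipated into the environment when erasing one bit at temperature T."""
    return K_B * T * math.log(2)

for T in (0.01, 4.2, 300.0):  # ~10 mK cryostat, liquid helium, room temperature
    print(f"T = {T:7.2f} K  ->  k_B T ln2 = {landauer_bound(T):.3e} J")
```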
|
In conclusion, we have addressed the issue of simulating quantum non-locality through work. In the task of EPR steering, the Maxwell demon can be introduced in collaboration with Alice to deceive Bob using only local operations and classical communication. The existence of Maxwell demon-assisted EPR steering implies a new type of loophole, i.e., the Maxwell demon loophole, which can only be closed by the participant carefully monitoring heat fluctuations in the local environment. To give a quantitative relationship between the quantum non-locality correlation in EPR steering and the work done by the demon, we construct a quantum circuit model of Maxwell demon-assisted EPR steering, which can be demonstrated on current quantum processors.
|
A
|
In particular, we would like to understand the prediction of the quadrupole approximation, namely that the rate of gravitational energy loss along $\mathcal{I}^{+}$ is given by $-1/|u|^{4}$ as $u\to-\infty$, dynamically, i.e. arising from suitable scattering data, rather than imposing it on $\mathcal{I}^{+}$ as was done in [Chr02].
|
It would also be interesting to find a definitive answer to the question whether or not the rate (1.42) can be improved without assuming additional regularity.
|
We have thus established the uniform convergence of the sequence $\{\phi_{1}^{(k)}\}$. In view of the uniformity of the convergence, the bounds from Propositions 5.1 and 5.5 carry over to the limiting solution, thus proving the estimates (5.59)–(5.61). Moreover, the methods of the proof show that this is the unique solution that has vanishing energy flux on $\mathcal{I}^{-}$ and satisfies the assumptions of §5.2.1. This concludes the proof.
|
It may be instructive for the reader to keep the following solution to (3.11) in the case $M=0$ in mind:
|
In view of the multipole structure of gravitational radiation, it thus seems to be necessary to first understand the answer to the following question:
|
D
|
As a consequence, the action of SLOCC operators on the states $|\psi_{z}\rangle$ would no longer be given by the corresponding Möbius transformation, and the statements in Theorem 1 would no longer hold.
|
Consider any $\text{SLIP}_{n}^{h}$ measure and two $(n+1)$-qubit states. If both states have at least 3 roots with respect to each subsystem, they are SLOCC-equivalent iff they are interconnected by one of the operators obtained as an outcome of Procedure 1.
|
The decomposition (4) can be performed with respect to any other subsystem, each with its own system of roots. Any local operator $\mathcal{O}_{k}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$
|
This system of four points can be mapped into a normal system (i.e. symmetrically related points $z,-z,1/z,-1/z$) by a Möbius transformation. Similar local transformations can be performed with respect to other subsystems, transforming the states into a state in the normal form.
|
To study the effect of SLOCC operations on the system of roots we begin by acting on the first qubit of a state $|\psi\rangle$ written in the form of Eq. (4) with an invertible linear operator
|
B
|
H. Gharibyan, C. Pattison, S. Shenker (private communication via Stephen Shenker and Sourav Chatterjee in June 2020) and K. Wells who coined it as
|
so that for all $H\in\Omega_{i}$ the statistics of the $i$-th rescaled gap of the eigenvalues $\lambda_{i}^{x}$ of $H^{x}$ is universal, i.e.
|
The basic guiding principle for establishing quenched universality of $H^{x}$ is to show that
|
The main universality result for the first mechanism (eigenbasis rotation) is the following quenched
|
Thus the main task is to show that eigenvectors of $H^{x}$ become asymptotically orthogonal for different, sufficiently distant values of $x$.
|
B
|
Fourth, the continuum assumption will break down near the wave front where the population is low. For the standard FKPP equation the resulting stochasticity introduces a speed reduction $\propto\ln^{-2}(N)$, with $N$ the approximate number of particles in the wave front [38]. In our model the bacterial population grows exponentially so one might expect this correction to vanish with time. However, the phage wave retains a constant population at the front, and as it is this wave which determines both wave speeds through the steepness $\lambda_{-}$, it seems probable that some stochastic correction will remain.
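To get a feel for the size of the $\propto\ln^{-2}(N)$ stochastic correction mentioned above, the short sketch below evaluates the bare $1/\ln^{2}(N)$ scaling for several front populations $N$; the model-dependent prefactor is deliberately left out.

```python
import numpy as np

def relative_speed_correction(N):
    """Scaling of the front-speed reduction with front population N (prefactor omitted)."""
    return 1.0 / np.log(N)**2

for N in (1e2, 1e4, 1e6, 1e8):
    print(f"N = {N:8.0e}  ->  1/ln^2(N) = {relative_speed_correction(N):.4f}")
```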
|
Our model cannot be tested in the usual environment of an agar plate: for such systems, where the phage are mobile rather than the bacteria, we predict that the asymptotic wave speed in populations of growing bacteria will vanish. A more suitable experimental setup for testing our theoretical predictions would therefore be a fluid-filled channel containing a suspension of swimming bacteria into which phage are inserted at one end. The various wave speeds and shapes would then be accessible via microscopy or light scattering. Experimental parameters could be controlled, e.g., through the nutritional quality [20] or viscosity [46] of the medium. Since we have mainly been interested in the long-time asymptotics, a natural concern is whether experiments will approach the asymptotic behaviour before the bacteria run out of nutrients. This section will answer this question through numerical calculations based on bacteriophage with a realistic lysis-time distribution, and other parameters chosen to match experimental data in the literature.
|
We focused here on the asymptotic wave speeds obtainable theoretically. Throughout, these wave speeds matched the predictions of FKPP theory, implying that these are pulled waves, i.e., driven by the infection dynamics in the very tip of the wave. This contrasts with recent work on bacteriophage plaques [29] where some conditions exhibited pushed waves, which are faster waves driven by growth in the body of the wave. It will be interesting to explore whether this absence of pushed waves is a generic feature of the type of model studied here, where the virus spreads principally through bacterial motility. It would also be interesting to explore the impact of our results on genetic diversity. However, we might expect this effect to be small: in general, only the small population at the front of a pulled wave [50, 51, 52] is able to contribute to genetic evolution, and the size of this front population will be governed by the decay length of the spreading wave, which itself is not predicted to be significantly affected by bacterial growth in realistic experimental conditions.
|
Our main result is that the infected and uninfected bacteria form self-similar travelling waves, which retreat before the expanding phage front and which grow exponentially in time. The phage also form a self-similar front, which does not grow exponentially, but this is only in the case where superinfection (where a single bacterium can be simultaneously infected by multiple phage) is permitted; without superinfection the phage wave also grows and changes shape as it develops. The speeds of these various waves depend on the species tracked (bacteria or phage) and on whether the front or peak of the wave is tracked: the viral wave is retarded, while the wavefront of infected bacteria is advanced, compared to the case without bacterial growth. The advanced speed of the infected bacterial wave does not stem from the initial conditions, as is usual in FKPP theory, but is instead controlled dynamically by the shape of the phage wavefront in a novel selection mechanism. Interestingly, the varying wave speed also causes a non-monotonic variation in the width of the infectious wave, which is narrowest at intermediate growth rates.
|
Here, we will study the impact of exponential bacterial growth on the spread of bacteriophage infections. We will focus on the asymptotic wave speed of the infection and, as in ref. [30], allow for bacterial and bacteriophage mobility. In this paper, we want to stress the more mathematical and general aspects of this theory, so in section II-section IV we will keep the model as simple as possible, suppressing certain aspects of bacterial and bacteriophage behaviour. Nevertheless, we hope that our results will inspire experimental investigations into the impact of growth on infection speeds, so in section V-section VI we will consider more general formulations of our model, which take into account features such as realistic distributions of the bacteriophage lysis time. We will also suggest a concrete experimental realisation to test our model, consisting of bacteriophages spreading through a thin, fluid-filled channel containing a population of growing bacteria.
|
A
|
Robustness of quantum advantage, $\Delta$, (ordinate) with the variation of noise strength, $p$ (abscissa). In Gaussian noise, $\sigma_{1}=\sigma_{2}=1$. In all cases, the number of photons added or subtracted is set to be 5. We fix $x=0.2~(\approx 4\,\mathrm{dB})$ in the lower panel and $x=0.05~(\approx 2\,\mathrm{dB})$ in the upper panel. All other notations are the same as in Fig. 4. Both axes are dimensionless.
|
The paper is organized in the following way. In Sec. 2, we provide the prerequisites which include the Chernoff bound (the upper bound on the efficiency of the illumination protocol), its classical limit, and the non-Gaussian states together with the noise models which we will use in our calculations. This is followed by Sec. 3 where we elucidate the advantages offered by non-Gaussian states, with a particular focus on the comparison between the single-mode addition and subtraction of photons and the two-mode operations. We then move on to the definition of quantum advantage and show how only certain non-Gaussian states can actually outperform the classical protocol, while others fail to do so. In Sec. 4, we introduce noise in probe states, modeled by Gaussian local noise and faulty twin beam generators, and establish the robustness exhibited by non-Gaussian states to various noise models while in Sec. 5, we compare Gaussian TMSV states with non-Gaussian states in two ways – one when non-Gaussian apparatus is inefficient and another via the correlation content of the states.
|
In any experimental implementation, noise is inevitable, and in our work, the effects of different noisy probe states generated via different imperfections on the illumination procedure are investigated. Considering local noise modeled by Gaussian distributions, we found that, unlike a noiseless scenario, if the signal transmission line equally affects both the non-Gaussian and coherent states having equal signal strength, all non-Gaussian states give a quantum advantage. Specifically, in the presence of certain critical noise values, benefits via non-Gaussian states increase with the increase of noise.
|
Instead of comparing the performance of noisy non-Gaussian states with the optimal classical scheme by coherent states,
|
We compare now the noisy non-Gaussian states with the corresponding noisy coherent state, i.e., noise affects both non-Gaussian and coherent states in a similar fashion, so that
|
D
|
$=\pi\wedge\omega+du\wedge\pi$
|
$=dB(x)-B(x)\,d\alpha+A\omega$
|
Now if we write $\Omega=\omega-du$ and note that $d\Omega=d\omega$, we have the equation
|
$=\pi\wedge\omega+du\wedge\pi$
|
$=\pi\wedge(\omega-du).$
|
D
|
$C^{d}_{p}=\{\textstyle\sum_{n}a_{n}\otimes\varrho_{n};\ \sum_{m}\sum_{n}\operatorname{Tr}a_{n}\sigma_{m}\cdot\operatorname{Tr}b_{m}\varrho_{n}\geq 0,\ \text{for all }b_{m}\geq 0,\ \sigma_{m}\geq 0\}=C_{i}.$
|
this claim is dual to the equivalence, in the two-dimensional case, of the tensor cone $C_d$ determining decomposable maps with the tensor cone determining positive maps, $C_p$.
|
The question of the existence of product vectors in a subspace of the tensor product of two Hilbert spaces can be formulated in terms of algebraic geometry. In particular, properties of projective spaces as well as the Segre variety appeared to be crucial. For more information the reader may consult [14]. The following Proposition, stated in [15], is a special case of a basic theorem given in [14].
|
As tensor cones are defined for the projective tensor product, the above tensor products $\otimes$ denote the projective tensor product $\otimes_{\pi}$.
|
super positive maps; they are determined by the largest tensor cone - the injective tensor cone $C_i$.
|
C
|
DM is free to propagate within the star and capture can, in principle, take place anywhere in the stellar interior. However, only a fraction of the DM flux traversing the star is effectively captured.
|
Figure 8: Capture rate in the optically thin limit for operators D1-D4 as a function of the DM mass $m_{\chi}$ for nucleons and exotic targets in the NS benchmark configuration QMC-4 ($1.9M_{\odot}$). All capture rates were calculated using the complete approach that accounts for strong interactions and momentum dependent form factors for baryons. Note that these rates scale as $\Lambda^{-4}$. The lower panels show the contribution of each baryonic species to the total capture rate associated with DM interactions with baryons (dashed blue line).
|
Below, we derive general expressions for the capture rate in the optically thin limit, for various DM mass regimes, correctly incorporating the effects of baryon structure and strong interactions.
|
Figure 9: Capture rate in the optically thin limit for operators D5-D10 as a function of the DM mass $m_{\chi}$ for nucleons and exotic targets in the NS benchmark configuration QMC-4 ($1.9M_{\odot}$). All capture rates were calculated using the complete approach that accounts for strong interactions and momentum dependent form factors for baryons. Note that these rates scale as $\Lambda^{-4}$.
|
Capture rate in the optically thin limit for the operators D5 (top) and D8 (bottom) as a function of the DM mass $m_{\chi}$ for neutron (left) and proton (right) targets,
|
B
|
$-\left[\frac{\omega(1-x_{b})}{2\beta^{2}x_{b}(x_{b}-x_{a})}-\frac{2s}{(1-x_{b})(x_{b}-x_{a})}\right]^{2}\Bigr].$
|
Here we did not give $S$ explicitly, since it is given in the equation above and does not appear again.
|
We first write the metric as given in [18], which is equivalent to the one given in [11]. We follow some of the work in [18] in the first part, and [6] in the second part of this section.
|
Here we start with the metric as given in [18] and try to write the wave equation, in the background of this metric, as given in [6], in the standard form given by [20]. How to perform this task is described in [21], as quoted by [22], and used meticulously by [23, 24]. The same method is recently used in [25, 36, 37, 38]. We need the standard form of the wave equation to be able to apply the information given in the existing literature to our work.
|
Note that this last transformation is one of the transformations which does not change the Heun form of the differential equation.
|
A
|
The expanders are physically realized as diffractive optical elements (DOE). Fabricating the DOEs consists of several stages. The first stage consists of etching the negative of the desired pattern onto a substrate. This etching is performed with laser beam lithography. The etched substrate forms a stamp which is then pressed onto a resin mold that is mounted on a glass substrate. The resin itself contains the final pattern. The resin has a wavelength dependent refractive index that we incorporate into our design framework. For the resin we used, the refractive indices are 1.5081 for 660 nm, 1.5159 for 517 nm, and 1.5223 for 450 nm. See Supplementary Note 8 for details.
|
While our experimental prototype was built for a HOLOEYE-PLUTO which possesses a 1K-pixel resolution, corresponding to a 1 mm eyebox with 75.6° horizontal and vertical FOV, the improvement in hologram fidelity persists across resolutions. Irrespective of the resolution of the SLM, performing 4×, 16×, or 64× étendue expansion with neural étendue expanders results in a similar margin of improvement over uniform and binary random expanders. This is because the improvement in fidelity depends only on the étendue expansion factor. To validate this we simulate an 8K-pixel SLM with 64× étendue expansion and we verify that the improvement in fidelity is maintained. See Supplementary Note 6 for results and further details. Thus, neural étendue expansion enables high fidelity expansion for 64× étendue expansion for 8K-pixel SLMs [30], providing étendue to cover 85% of the human stereo FOV [31] with an 18.5 mm eyebox size, see Supplementary Note 3 for details.
|
We used PyTorch to design and evaluate the neural étendue expanders. See Supplementary Notes 2 and 3 for details on the optimization framework, evaluation, and analysis.
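As a rough illustration of what such a PyTorch pipeline can look like, the following minimal sketch jointly optimizes a phase-only expander together with per-image SLM phase patterns under a simple Fraunhofer (FFT) propagation model. All names, sizes, and hyperparameters here are assumptions for illustration; the authors' actual optimization framework, losses, and wavelength-dependent modeling are described in their Supplementary Notes.

import torch

# Toy sizes: a 32x32 SLM phase pattern and a 4x etendue expansion factor (assumed).
n_slm, factor = 32, 4
n_out = n_slm * factor
targets = torch.rand(8, n_out, n_out)                               # stand-in target images

expander = torch.nn.Parameter(torch.zeros(n_out, n_out))            # learned expander phase
slm = torch.nn.Parameter(torch.zeros(len(targets), n_slm, n_slm))   # per-image SLM phases
opt = torch.optim.Adam([expander, slm], lr=2e-2)

def propagate(slm_phase, expander_phase):
    # Upsample the SLM field to the expander grid, apply the expander phase,
    # then propagate to the far field with an FFT (Fraunhofer approximation).
    field = torch.exp(1j * slm_phase)
    field = field.repeat_interleave(factor, dim=-1).repeat_interleave(factor, dim=-2)
    field = field * torch.exp(1j * expander_phase)
    far_field = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
    return far_field.abs() ** 2

for step in range(100):
    opt.zero_grad()
    image = propagate(slm, expander)
    image = image / image.mean(dim=(-2, -1), keepdim=True)          # crude intensity normalization
    target = targets / targets.mean(dim=(-2, -1), keepdim=True)
    loss = torch.mean((image - target) ** 2)
    loss.backward()
    opt.step()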
|
We evaluated the neural étendue expanders using a prototype holographic display. The prototype consists of a HOLOEYE-PLUTO SLM, a 4F system, a DC block, and a camera for imaging the étendue expanded holograms. See Supplementary Notes 9 and 10 for details.
|
We validate neural étendue expansion experimentally with a holographic display prototype. See Fig. 2a for a schematic of the hardware prototype and Supplementary Notes 9 and 10 for further details on the experimental setup.
|
C
|
(Thöne et al. 2011), to the establishment of the ultra-long-duration GRB class (Levan et al. 2014). GRB 111209A was found to be
|
(Thöne et al. 2011), to the establishment of the ultra-long-duration GRB class (Levan et al. 2014). GRB 111209A was found to be
|
To further explore this color-change in GRB-SNe, we need to collect more observations in the rest-frame UV. This can be done by observing rare nearby events in the UV, or with deep optical observations of the more distant GRB-SNe.
|
This discovery immediately opened multiple new lines of inquiry. We now question whether all ultra-long GRBs are associated with anomalous GRB-SNe, and
|
if so, whether they are similar to SN 2011kl or outliers in other aspects. Moreover, we would like to know if such peculiar, highly luminous GRB-SNe are exclusively
|
C
|
In this tutorial review, we presented theory for reverse osmosis (RO) and electrodialysis (ED), explaining how both technologies are based on the same fundamental transport theory. This is the solution-friction (SF) theory, and for ED we solved it in the absence of convection, thus we did not discuss pressures. We used SF theory for RO but then only described neutral solutes. Finally we solved SF theory for the osmosis experiment based on ions and a charged membrane, and we compared with experimental data. For ED we also developed new equations for Donnan equilibrium that extend the standard ideal expression. We present analytical equations for current efficiency, showing that this is a process parameter, not a membrane material property. For RO we summarized the literature for SF theory for neutral solutes including also the effect of concentration polarization. The general derivation we provided of SF theory also results in the twice-extended Nernst-Planck equation which is generally applicable in describing ion flow in reverse osmosis and nanofiltration of salt solutions.
|
Water treatment generally refers to the removal of contaminants other than salts, such as organic micropollutants (OMPs), whereas desalination and deionization refer to the removal of salts, thus of ions. RO is a method that uses pressure to drive water through a membrane, keeping most of the ions and other solutes on the retentate side, producing freshwater as permeate. (In this work we use the words ‘ion’ and ‘solute’ interchangeably for the charged and uncharged species dissolved in the water.) Nanofiltration (NF) is a companion technology of RO that uses lower pressures, and membranes with larger pore sizes than in RO. In NF, the rejection of monovalent ions is much lower than of divalent ions and thus divalent ions can be selectively removed. In ED, water flows through thin channels next to ion-exchange membranes (IEMs) and an applied current pulls the ions from one set of channels through the IEMs to other channels. Though ED and RO are very different and use different physical mechanisms, the underlying transport theory for flow of water and solutes is the same. Thus a generalized treatment is possible that applies to both process types. We also show that key performance indicators on the module level, of relevance for (economic) process optimization, are defined for RO and ED in the same way.
|
In general, for multi-ionic salt mixtures, and when we also include the partitioning coefficient, $\Phi_i$, possibly different between all ions, it is advisable to return to a Boltzmann equation for each ion, Eq. (34), and solve that in combination with electroneutrality in the membrane. In further model extensions, we can include how the membrane charge is a function of pH, or of the concentration of other ions, such as the Ca$^{2+}$ concentration just in the membrane (in case these ions adsorb), and these ion concentrations in turn depend on the Donnan potential, thus on $X$. Also other acid-base associations between ions can be included, such as the protonation of an ion, for instance NH$_3$ that can react to NH$_4^+$, with the distribution between these two species depending on pK and local pH [42]. Furthermore, concentration profiles across the width of the channels (especially for solutions with three or more types of ions) result in ion concentrations just outside the membrane (which are the concentrations that are used in the Donnan equilibria) that can be quite different from the channel-averaged concentrations at that z-position. Other extensions are transmembrane water flow, which also modifies ion concentrations just next to the membrane. Thus, the simplified model explained in this section may not suffice for all conditions relevant in practical application.
|
Topics that we did not address in this tutorial review are first of all that both for RO and NF we must implement the Nernst-Planck equation for ions and a charged membrane in a full module calculation, and beyond that extend the theory from simple 1:1 salt solutions to multi-ionic solutions, also for electrodialysis. Even the addition of one extra type of anion or cation can significantly change the entire modeling framework. In addition, in real water sources also the protonation degree of ions must be considered, which depends on local pH. At high concentrations, ions also associate in ion pairs. These effects are relevant to study because for instance an ammonium ion is acted on by the electrical field, but the neutral ammonia species is not. Thus rejection of these ions is strongly pH-dependent. For very tiny pores, a related topic is the effective size of ions, that has an impact on their partitioning and their mobility within the membranes. Ions with a higher charge will be hydrated better, and are expected to be slower. State-of-the-art theory for simultaneous transport and reaction of ions (such as acid-base reactions between ions) assumes that these reactions are very fast, but it is interesting to investigate whether that is a correct assumption. Another important assumption is local electroneutrality in channels and in membranes. Especially in reverse osmosis with membranes as thin as 100 nm, it is important to know if possibly Poisson’s equation must be used to replace the assumption of local electroneutrality in the membrane.
|
An ED stack consists of many cell pairs, with electrodes on the two sides of the stack, where electronic current becomes ionic current. In this review we do not discuss the electrodes but we focus on the repeating unit of an ED stack, which is the membrane cell pair, see Fig. 1B, which consists of two membranes and two flow channels. As mentioned, in the flow channels we assume co-current flow of the water along the membranes, from inlet to exit of the channel. While the water flows through these thin channels between the membranes, ions move from the d-channels to the c-channels by transport across the membranes. This process is possible because –driven by the electrical current– anions move in one direction out of each d-channel, to be transported through membranes that are selective for anions. These are ‘anion-exchange membranes’ or AEMs, which are membranes in which the water-filled pores are lined with high concentrations of positive membrane charges (of the order of 5 M fixed membrane charge per volume of water in the membrane). The cations move in the other direction, passing cation-exchange membranes, or CEMs. These CEMs have a high concentration of fixed negative charge, again of the order of 5 M, and they preferentially allow access to cations, largely blocking the passage of anions. Each d-channel has such an AEM on one side, and a CEM on the other. The net effect of this layout of a cell pair, of a sequence of an AEM, d-channel, CEM, and c-channel, and this repeated tens to hundreds of times, is that the d-channels are being desalinated, while the salinity in the c-channels increases going from entrance to exit of the channel. Thus in the AEMs anions are the main charge carrier, while in the CEMs cations carry most of the charge. These ions in their respective membrane are the counterions (anions in an AEM, cations in a CEM). The minority ions, that ideally are fully blocked, are called coions (cations in AEM, anions in CEM). Though in ED there will also be water flowing through the membrane, we will neglect that aspect in this tutorial. In this case, water recovery, wr, is directly set by how much of the feedwater flows to the c-channels and how much to the d-channels, namely $\text{wr}=1/\left(1+\phi_{\rm v,c}/\phi_{\rm v,d}\right)$. Thus, for equal flow rates to the two channels, we have wr = 0.50.
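As a quick numerical illustration of this water-recovery relation (a sketch only; the function and variable names are ours), one can check the equal-flow case directly:

# Toy check of wr = 1 / (1 + phi_vc / phi_vd), with phi_vc and phi_vd the
# volumetric flow rates fed to the concentrate and diluate channels.
def water_recovery(phi_vc: float, phi_vd: float) -> float:
    return 1.0 / (1.0 + phi_vc / phi_vd)

print(water_recovery(1.0, 1.0))   # equal split -> 0.5
print(water_recovery(1.0, 3.0))   # three times more feed to the diluate -> 0.75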
|
C
|
QuantumNAT is fundamentally different from existing methods: (i) Prior work focuses on low-level numerical correction in inference only; QuantumNAT embraces more optimization freedom in both training and inference. It improves the intrinsic robustness and statistical fidelity of PQC parameters. (ii) PQC has a good built-in error-tolerance which motivates QuantumNAT's post-measurement quantization to reduce the numerical precision of intermediate results while preserving accuracy. (iii) QuantumNAT has a small overhead (<2%), while others introduce high measurement overhead, circuit complexity cost, etc. We show that the existing extrapolation method is orthogonal to QuantumNAT in Section 4.
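To make the idea of post-measurement quantization concrete, here is a minimal, illustrative sketch in PyTorch that snaps measured Pauli-Z expectation values in [-1, 1] to a small set of uniformly spaced levels; the level count and the straight-through gradient trick are our assumptions for illustration, not the paper's exact recipe:

import torch

def quantize_expectations(exp_vals: torch.Tensor, n_levels: int = 8) -> torch.Tensor:
    step = 2.0 / (n_levels - 1)                      # spacing of levels in [-1, 1]
    q = torch.round((exp_vals + 1.0) / step) * step - 1.0
    # Straight-through estimator: forward pass uses the quantized values,
    # backward pass treats quantization as the identity.
    return exp_vals + (q - exp_vals).detach()

x = torch.tensor([-0.93, -0.10, 0.42, 0.88], requires_grad=True)
print(quantize_expectations(x, n_levels=8))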
|
To improve NN efficiency, extensive work has explored trimming down redundant bit representations in NN weights and activations (Han
|
Figure 2. Quantum Neural Networks Architecture. QNN has multiple blocks, each has an encoder to encode classical values to quantum domain, quantum layers with trainable weights, and a measurement layer that obtains classical values.
|
et al., [n. d.]) 2-class (frog, ship). MNIST, Fashion, and CIFAR use 95% of images in the ‘train’ split as the training set and 5% as the validation set. Due to the limited real QC resources, we use the first 300 images of the ‘test’ split as the test set. The Vowel-4 dataset (990 samples) is separated into train:validation:test = 6:1:3 and tested with the whole test set. MNIST and Fashion images are center-cropped to 24×24 and then down-sampled to 4×4 for 2- and 4-class, and 6×6 for 10-class; CIFAR images are converted to grayscale, center-cropped to 28×28, and down-sampled to 4×4. All down-samplings are performed with average pooling. For Vowel-4, we perform feature principal component analysis (PCA) and take the 10 most significant dimensions.
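A minimal sketch of this preprocessing for the MNIST/Fashion case (center-crop to 24×24, then average-pool down to 4×4; the 10-class case only changes the output size) could look as follows; tensor shapes and loader details are assumptions for illustration:

import torch
import torch.nn.functional as F

def preprocess(images: torch.Tensor, crop: int = 24, out: int = 4) -> torch.Tensor:
    # images: (batch, 1, 28, 28) in [0, 1]
    h, w = images.shape[-2:]
    top, left = (h - crop) // 2, (w - crop) // 2
    cropped = images[..., top:top + crop, left:left + crop]   # center crop
    return F.adaptive_avg_pool2d(cropped, out)                # average pooling to out x out

batch = torch.rand(16, 1, 28, 28)
print(preprocess(batch).shape)                                # torch.Size([16, 1, 4, 4])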
|
Moreover, by sparsifying the parameter space, quantization reduces the NN complexity as a regularization mechanism that mitigates potential overfitting issues.
|
A
|
This work includes data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration. The Fermi-LAT Collaboration acknowledges support for LAT development, operation and data analysis from NASA and DOE (United States), CEA/Irfu and IN2P3/CNRS (France), ASI and INFN (Italy), MEXT, KEK, and JAXA (Japan), and the K.A. Wallenberg Foundation, the Swedish Research Council and the National Space Board (Sweden). Science analysis support in the operations phase from INAF (Italy) and CNES (France) is also gratefully acknowledged.
|
This work performed in part under DOE Contract DE-AC02-76SF00515. MG, PM and RS acknowledge the partial support of this research by grant 21-12-00343 from the Russian Science Foundation. KH has been supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1A5A1013277 and 2020R1A2C1007219), and also financially supported during the research year of Chungbuk National University in 2021.
|
AF received funding from the German Science Foundation DFG, within the Collaborative Research Center SFB1491 “Cosmic Interacting Matters - From Source to Signal”. YY thanks the Heising–Simons Foundation for financial support. SR was supported by the Helmholtz Weizmann Research School on Multimessenger Astronomy, funded through the Initiative and Networking Fund of the Helmholtz Association, DESY, the Weizmann Institute, the Humboldt University of Berlin, and the University of Potsdam. ECK acknowledges support from the G.R.E.A.T research environment funded by Vetenskapsrådet, the Swedish Research Council, under project number 2016-06012, and support from The Wenner-Gren Foundations. MMK acknowledges generous support from the David and Lucille Packard Foundation. This work was supported by the GROWTH project funded by the National Science Foundation under Grant No 1545949.
|
This work performed in part under DOE Contract DE-AC02-76SF00515. MG, PM and RS acknowledge the partial support of this research by grant 21-12-00343 from the Russian Science Foundation. KH has been supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1A5A1013277 and 2020R1A2C1007219), and also financially supported during the research year of Chungbuk National University in 2021.
|
AF received funding from the German Science Foundation DFG, within the Collaborative Research Center SFB1491 “Cosmic Interacting Matters - From Source to Signal”. YY thanks the Heising–Simons Foundation for financial support. SR was supported by the Helmholtz Weizmann Research School on Multimessenger Astronomy, funded through the Initiative and Networking Fund of the Helmholtz Association, DESY, the Weizmann Institute, the Humboldt University of Berlin, and the University of Potsdam. ECK acknowledges support from the G.R.E.A.T research environment funded by Vetenskapsrådet, the Swedish Research Council, under project number 2016-06012, and support from The Wenner-Gren Foundations. MMK acknowledges generous support from the David and Lucille Packard Foundation. This work was supported by the GROWTH project funded by the National Science Foundation under Grant No 1545949.
|
A
|
$|{\rm g}\rangle\sim 4{\rm S}_{1/2}(m=-1/2)\leftrightarrow|{\rm e}\rangle\sim 3{\rm D}_{5/2}(m=-1/2)$ electronic transition with frequencies $\omega_{b}$ and $\omega_{r}$, respectively [1, 3, 9]. The state preparation is followed by estimation of the phonon-number probabilities $P_{n}$ using the precise analysis of Rabi oscillations on the first blue motional sideband [39].
|
Figure 1: a) The mechanical oscillator corresponds to axial harmonic motion of a single $^{40}$Ca$^{+}$ ion localized in a linear Paul trap. The generation and analysis of states approaching idealized Fock states illustrated by their corresponding wave functions is implemented through interaction of the electronic ground $|{\rm g}\rangle$ and metastable $|{\rm e}\rangle$ states with the quantized harmonic motion on the first motional sidebands. b) The sequence for preparation of genuine QNG states includes initialization to the electronic and motional ground state $|{\rm g},0\rangle$ followed by deterministic transfer of the accumulated population to number states by repetitive coherent excitation of the blue and red sidebands depicted as blue and red arrows, respectively. The spectroscopic analysis of phonon number populations $P_{n}$ around the target Fock state implements an unambiguous identification of the genuine QNG features. c) Characterization of the Fock states of mechanical motion. The yellow points represent the measured populations $P_{n}$ for the experimentally generated states. The thresholds for genuine $n$-phonon QNG are represented by blue points with the corresponding black numbers showing their numerical value [24, 2]. The associated blue numbers quantify the thermal depth of genuine $n$-phonon QNG. Similarly, the red points identify thresholds for observation of basic QNG aspects [35] and the associated red numbers determine their thermal depth. The green bars depict the force estimation capability of a specific model of noisy Fock states, where the probability $P_{n}$ exceeding the presented threshold values certifies a metrological advantage [12] against the previous ideal Fock state $|n-1\rangle$, while the corresponding numbers quantify the thermal depth of this advantage for the measured states.
|
The genuine $n$-phonon quantum non-Gaussianity of pure states manifests itself by the proper number of negative annuli in the Wigner function. The topology of negative regions in the Wigner function exposes the genuine $n$-order quantum non-Gaussianity because each Fock state exhibits a specific number of annuli, which is not changed by the squeezing or displacement. Note that the reliability of such a criterion holds straightforwardly only for pure states, since stochastic processes affecting the states $D(\alpha)S(r)\sum_{m=0}^{n-1}c_{m}|n\rangle$ can partially increase the number of negativities.
|
Fig. 1-c) analyses the exhibition of genuine $n$-phonon quantum non-Gaussianity using idealized and measured Fock states (yellow data points).
|
Figure S4: The thermal depth of the genuine $n$-phonon quantum non-Gaussianity (green points), the quantum non-Gaussianity (blue points) and negativity in the Wigner function (red points) that are exhibited by the ideal Fock states. The thermalization deteriorates the Fock states according to the map (S7). The vertical axis quantifies the maximal mean number of thermal phonons that preserves the presented quantum aspects. Note, the quantum non-Gaussianity and genuine one-phonon quantum non-Gaussianity are identical properties, and therefore their depth is the same for $|1\rangle$.
|
C
|
If $M_{i}$ rejects, then $f_{i}$ is uniformly random.
|
We may also construct oracles by joining other oracles together. For example, if we have a pair of oracles $A,B:\{0,1\}^{*}\to\{0,1\}$, then $\mathcal{O}=(A,B)$ means that we define $\mathcal{O}:\{0,1\}^{*}\to\{0,1\}$ by:
|
Let $\mathcal{D}$ be the resulting distribution over oracles $\mathcal{O}=(A,B)$.
|
Let $\mathcal{D}$ be the resulting distribution over oracles $\mathcal{O}=(A,B,C)$. We will show that the statement of the theorem holds with probability $1$ over $\mathcal{O}$ sampled from $\mathcal{D}$.
|
$\mathsf{P}^{\mathcal{O}}=\mathsf{NP}^{\mathcal{O}}$ with probability $1$ over $\mathcal{O}$.
|
B
|
Network data analysis is an important research topic in a range of scientific disciplines in recent years, particularly in biological science, social science, physics and computer science. Many researchers aim at analyzing these networks by developing models, quantitative tools and theoretical frameworks to have a deeper understanding of the underlying structural information. A problem in network science that is of major interest is “community detection”. The Stochastic Blockmodel (SBM) [1] is a classic model for un-weighted networks for community detection. In SBM, every node in the same community shares the same expectation degree, which is unrealistic since node degrees vary in most real-world networks. To overcome this limitation of SBM, the popular Degree Corrected Stochastic Blockmodel (DCSBM) proposed in [2] considers node heterogeneity to extend SBM by allowing nodes in the same community to have various expectation degrees. Many community detection methods and theoretical studies have been developed under SBM and DCSBM, to name a few, [3, 4, 5, 6, 7, 8], and references therein.
|
In this paper, we introduced the Degree-Corrected Distribution-Free Model (DCDFM), a model for community detection on weighted networks. The proposed model is an extension of previous Distribution-Free Models by incorporating node heterogeneity to model real-world weighted networks in which node degrees vary, and it also extends the classical degree-corrected stochastic blockmodels to weighted networks by allowing the connectivity matrix to have negative elements and allowing elements of the adjacency matrix $A$ to be generated from an arbitrary distribution as long as the expectation adjacency matrix $\Omega$ enjoys the block structure in Eq. (5). We develop an efficient spectral algorithm for estimating node labels under DCDFM by applying the k-means algorithm to all rows of the normalized eigenvectors of the adjacency matrix. Theoretical results obtained by delicate spectral analysis guarantee that the algorithm is asymptotically consistent. The distribution-free property of our model allows us to analyze the behaviors of our algorithm when $\mathcal{F}$ is set as different distributions. When DCDFM degenerates to DFM or DCSBM, our theoretical results match those under DFM or DCSBM. Numerical results on both simulated and empirical weighted networks demonstrate the advantage of our method designed by considering the effect of node heterogeneities. Meanwhile, to compare performances of different methods on weighted networks with unknown information on node communities, we proposed the general modularity as an extension of Newman's modularity. Results on simulated weighted networks and real-world un-weighted networks suggest the effectiveness of the general modularity. The tools developed in this paper can be widely applied to study the latent structural information of both weighted networks and un-weighted networks. Another benefit of DCDFM is the potential for simulating weighted networks under different distributions. Furthermore, there are many dimensions in which we can extend our current work. For example, $K$ is assumed to be known in this paper. However, for most real-world weighted networks, $K$ is unknown. Thus, estimating $K$ is an interesting topic. Some possible techniques applied to estimate $K$ can be found in [46, 47, 48]. Similar to [4], studying the influence of outlier nodes theoretically for weighted networks is an interesting problem. Developing a method for the weighted network community detection problem based on modularity maximization under DCDFM, similar to that studied in [6], is also interesting. Meanwhile, spectral algorithms accelerated by the ideas of random-projection and random-sampling developed in [35] can be applied to handle large-scale networks, and we can take advantage of the random-projection and random-sampling ideas directly for weighted network community detection under DCDFM. We leave studies of these problems for our future work.
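A schematic version of the spectral step described above (leading eigenvectors of the weighted adjacency matrix, row normalization, then k-means) might look as follows; the normalization choice and the synthetic test network are our assumptions, not the authors' exact algorithm:

import numpy as np
from sklearn.cluster import KMeans

def spectral_labels(A: np.ndarray, K: int, seed: int = 0) -> np.ndarray:
    vals, vecs = np.linalg.eigh(A)                   # A is symmetric (weighted adjacency)
    idx = np.argsort(-np.abs(vals))[:K]              # K leading eigenvalues by magnitude
    U = vecs[:, idx]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)  # row-normalize
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(U)

# Tiny synthetic weighted network with two planted communities.
rng = np.random.default_rng(0)
A = rng.normal(0.2, 0.05, (40, 40))
A[:20, :20] += 1.0
A[20:, 20:] += 1.0
A = (A + A.T) / 2
print(spectral_labels(A, K=2))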
|
(a) DCDFM models weighted networks by allowing nodes within the same community to have different expectation degrees. Though the WSBM developed in [12] also considers node heterogeneity, it requires all elements of the connectivity matrix to be nonnegative, and fitting it by spectral clustering is challenging. Our DCDFM inherits the advantages of DFM such that it has no constraint on the distribution of the adjacency matrix, allows the connectivity matrix to have negative entries, and allows applying the idea of spectral clustering to fit it. Meanwhile, as an extension of DFM, similar to the relationship between SBM and DCSBM, nodes within the same community can have different expectation degrees under our DCDFM, and this ensures that DCDFM can model real-world weighted networks in which nodes have various degrees.
|
Network data analysis is an important research topic in a range of scientific disciplines in recent years, particularly in biological science, social science, physics and computer science. Many researchers aim at analyzing these networks by developing models, quantitative tools and theoretical frameworks to have a deeper understanding of the underlying structural information. A problem in network science that is of major interest is “community detection”. The Stochastic Blockmodel (SBM) [1] is a classic model for un-weighted networks for community detection. In SBM, every node in the same community shares the same expectation degree, which is unrealistic since node degrees vary in most real-world networks. To overcome this limitation of SBM, the popular Degree Corrected Stochastic Blockmodel (DCSBM) proposed in [2] considers node heterogeneity to extend SBM by allowing nodes in the same community to have various expectation degrees. Many community detection methods and theoretical studies have been developed under SBM and DCSBM, to name a few, [3, 4, 5, 6, 7, 8], and references therein.
|
However, most works built under SBM and DCSBM require the elements of the adjacency matrix of the network to follow a Bernoulli distribution, which limits the network to being un-weighted. Modeling and designing methods to quantitatively detect latent structural information in weighted networks are interesting topics. In recent years, some Weighted Stochastic Blockmodels (WSBM) have been developed for weighted networks, to name a few, [9, 10, 11, 12, 13, 14, 15]. However, though these models for weighted networks are attractive, they always require all elements of the connectivity matrix to be nonnegative or all elements of the adjacency matrix to follow some specific distributions, as found in [16]. Furthermore, spectral clustering is widely used to study the structure of networks under SBM and DCSBM, for example, [17, 18, 19, 20, 21, 22]. Another limitation of the above WSBMs is that it is challenging to develop methods that take advantage of the spectral clustering idea under these WSBMs, owing to their complex forms or strict constraints on the edge distribution. To overcome the limitations of these weighted models, [16] proposes a Distribution-Free Model (DFM) which has no requirement on the distribution of the adjacency matrix's elements and allows developing methods to fit the model by taking advantage of spectral clustering. DFM can be seen as a direct extension of SBM, and nodes within the same community under DFM share the same expectation degrees, which is unrealistic for empirical networks with various node degrees.
|
D
|
Whereas, $F(\mathcal{B}^{\rm c}_{\eta},\mathcal{B}_{\rm pr})=\eta<\frac{3(\eta+1)}{2\eta+4}$ for all $\eta\in[0,1)$. This gives an upper bound to the achievable fidelity between a correlated non-local box and a PR box.
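To see why the quoted inequality holds on all of $[0,1)$, one short check (ours, for illustration) is
\[
\frac{3(\eta+1)}{2\eta+4}-\eta
  =\frac{3+3\eta-2\eta^{2}-4\eta}{2\eta+4}
  =\frac{(2\eta+3)(1-\eta)}{2\eta+4}>0
  \qquad\text{for }\eta\in[0,1),
\]
with equality reached only in the limit $\eta\to 1$.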
|
In this article, we first focus on state transformations which involve a single copy of a quantum state. We consider the intermediate regime between probabilistic and approximate transformations for which very few results have been presented so far [34, 35, 33, 36, 37]. Here, the goal is to convert a quantum state $\rho$ into another quantum state $\sigma$ with maximal probability, allowing for a small error in the conversion. This is relevant in many practical schemes of entanglement manipulation, as they explicitly allow for a small probability of failure and the optimal fidelity is considered in the case of success [38, 39, 40, 41, 42, 43, 44, 45, 46]. Specifically in quantum networks, the trade-off between the probability of success and the achievable fidelity is highly relevant [47, 48]. We provide general bounds on the probabilities achievable in this way, for general quantum resource theories. We also study the nature of our bounds in the asymptotic limit, giving upper bounds on the asymptotic transformation rates. For transformations between entangled states, we provide a complete solution for transformations between pure states of arbitrary dimensions and for two qubits in the case that the initial state is pure. Finally, we show that the deterministic version of the single copy bounds can be utilized to set restrictions on transformation of channels.
|
We now study the nature of our bounds in the asymptotic limit. As we show now, our single-copy bounds imply upper bounds on the asymptotic transformation rates in general resource theories. The deterministic rate for a transformation between $\rho$ and $\sigma$ is given by
|
Here, the infimum is taken over the set of deterministic free operations ($\Lambda_{f}$). We now generalise the above definition to the probabilistic case where the probability of success is not allowed to decay too fast (exponentially in the number of copies). We also allow for generation of small amounts of resource, quantified by $M$. Here, small amounts mean sub-exponential in the number of copies of the state. In such a scenario, we can define asymptotic rates as follows
|
We investigated the problem of converting quantum states within general quantum resource theories, and within the theory of entanglement. In particular, we considered probabilistic transformations, allowing for a small error in the final state. For general resource theories, we obtained upper bounds on the conversion probability and fidelity in Theorem 1. These results significantly improve previously known bounds, and establish limits on the possible precision of probabilistic transformations in all quantum resource theories. As an application, we show that these bounds imply an upper bound on the asymptotic rates for various classes of states. This upper bound on rates turns out to be robust against sub-exponentially decaying (in number of copies of the state) probability of success and sub-exponential (in number of copies of the state) resource generation power of the transformation. We also show that the deterministic version of the single copy bounds can be applied for resource theories of quantum channels, which provide upper bounds on the conversion fidelity. In Theorem 3 we provide a complete solution for the stochastic-approximate transformations of arbitrary dimensional bipartite pure entangled states. In Theorem 4 we focused on two-qubit systems, and provided a complete solution to this problem if the initial state is pure. Furthermore, we apply the channel bound (Eq. (27)) to the resource theory of nonlocality, providing an upper bound for the optimal achievable fidelity between PR box and isotropic box. We show that this bound is tight and any locality preserving superchannel cannot increase the fidelity between the isotropic box and PR box.
|
D
|
We selected some radionuclides of interest to test the predictions of the presented atomic models under different conditions: atomic number ($Z=4$–$57$); transition nature (allowed: 7Be, 37Ar, 54Mn, 55Fe, 109Cd and 125I; first forbidden unique: 41Ca; second forbidden unique: 138La); and availability of accurate measurements to compare with (except for 41Ca). The dominant electron-capture transition in each case was studied. Calculation of capture probabilities was performed using the recommended $Q$ values established in the latest Atomic Mass Evaluation AME2020 Wang et al. (2021). Nuclear level energies were taken from the latest ENSDF evaluations for the following decays: 54Mn Dong and Junde (2014), 109Cd Kumar et al. (2016), 125I Katakura (2011) and 138La Chen (2017).
|
Most often, experimental values are given as relative, i.e., as a ratio of capture probabilities between two shells, instead of absolute capture probabilities, which are much more difficult to measure precisely. To unify the presentation of their comparison with the theoretical predictions, we defined the following quantities
|
Table 2: Comparison of calculated and measured capture probabilities for different isotopes considered in the present work. The three models and the experimental values are described in the text.
|
Capture probabilities from BetaShape have been compared with a selection of measurements available in the literature, leading to the conclusion that new high-precision measurements are needed to validate and constrain the theoretical models Mougeot (2018). This played a crucial role in the inception of the European metrology project MetroMMC Ranitzsch et al. (2020), which was dedicated to advancing our comprehension of electron-capture decay and the subsequent processes involved in atomic relaxation. The ongoing European metrology project PrimA-LTD Pri (2024) also addresses this topic, one of its ambitions being the measurement of the 55Fe capture spectrum with unprecedented precision. Such high-precision measurements challenge the theoretical predictions, for which the accuracy of the atomic modeling is essential. Indeed, as the electron-capture process takes place inside the nucleus, the description of the electronic properties of atoms must be as precise as possible, in particular in this region of space. In addition, one can also wonder about the role played by electron correlations in the decay process. Trying to answer these two issues constitutes the main goal of this work.
|
The theoretical capture probabilities of several transitions of interest have been compared with experimental values with relative uncertainties from 0.2% to 3.5%, except for 7Be (10%) and 41Ca (no existing measurement). Such a comparison covers a wide range of atomic numbers, $3\leq Z\leq 57$, as well as different transition natures. KLI and MCDF predictions agree well and are better than BetaShape results for 37Ar, 54Mn, 125I and the $P_{M}/P_{K}$ ratio in 138La decay. BetaShape predictions are surprisingly in much better agreement with experiment in all other cases. Our understanding is that the inaccuracies of its atomic model – binding energies, hole and shaking effects – somehow compensate each other. However, this is not always true, as clearly seen with 37Ar, 41Ca and 54Mn decays, without any hint to anticipate such a breakdown. New high-precision measurements are needed to explore this in detail, with more complete sets of capture probabilities per radionuclide that include outer shells.
|
A
|
For US Top-500 Airport Network, it has 500 × 0.1400 = 70 highly mixed nodes and 500 × 0.7820 = 391 highly pure nodes.
|
For Political blogs, it has 1222 × 0.0393 ≈ 48 highly mixed nodes and 1222 × 0.8781 ≈ 1073 highly pure nodes.
|
For US airports, it has 1572 × 0.0865 ≈ 136 highly mixed nodes and 1572 × 0.8575 ≈ 1348 highly pure nodes.
|
For Train bombing, it has 64 × 0.0938 ≈ 6 highly mixed nodes and 64 × 0.7969 ≈ 51 highly pure nodes.
|
For Karate-club-weighted, it has 34 × 0.0588 ≈ 2 highly mixed nodes and 34 × 0.7941 ≈ 27 highly pure nodes.
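These counts follow directly from the reported node totals and fractions; a quick recomputation (illustrative only) is:

# Recompute the counts above from the node totals and fractions quoted in the text.
networks = {
    "US Top-500 Airport Network": (500, 0.1400, 0.7820),
    "Political blogs": (1222, 0.0393, 0.8781),
    "US airports": (1572, 0.0865, 0.8575),
    "Train bombing": (64, 0.0938, 0.7969),
    "Karate-club-weighted": (34, 0.0588, 0.7941),
}
for name, (n, f_mixed, f_pure) in networks.items():
    print(name, round(n * f_mixed), "highly mixed,", round(n * f_pure), "highly pure")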
|
A
|
ECR acknowledges the ThinkSwiss Research Scholarship, funded by the State Secretariat for Education, Research and Innovation (SERI) for the opportunity to spend three months at the University of Zürich Institute for Computational Science; travel support from the Alexander Vyssotsky Award from the University of Virginia to present this work; and to many associates – namely Sven De Rejicke, Arjen van der Wel, Charles Steinhardt, Luca Beale, Robin Leichtnam and Hugues Lascombes – for their conversations regarding this analysis. ECR is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD).
|
We acknowledge PRACE for awarding us access to MareNostrum at the Barcelona Supercomputing Center (BSC), Spain. This research was partly carried out via the Frontera computing project at the Texas Advanced Computing Center. Frontera is made possible by National Science Foundation award OAC-1818253. This work was supported in part by a grant from the Swiss National Supercomputing Centre (CSCS) under project IDs s697 and s698. We acknowledge access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland under the University of Zurich's share with the project ID uzh18. This work made use of infrastructure services provided by S$^{3}$IT (www.s3it.uzh.ch), the Service and Support for Science IT team at the University of Zürich.
|
RF acknowledges financial support from the Swiss National Science Foundation (grant no PP00P2_157591, PP00P2_194814, and 200021_188552).
|
MBK acknowledges support from NSF CAREER award AST-1752913, NSF grant AST-1910346, NASA grant NNX17AG29G, and HST-AR-15006, HST-AR-15809, HST-GO-15658, HST-GO-15901, HST-GO-15902, HST-AR-16159, and HST-GO-16226 from the Space Telescope Science Institute (STScI), which is operated by AURA, Inc., under NASA contract NAS5-26555.
|
AW received support from NSF CAREER grant 2045928; NASA ATP grants 80NSSC18K1097 and 80NSSC20K0513; HST grants AR-15057, AR-15809, GO-15902 from STScI; a Scialog Award from the Heising-Simons Foundation; and a Hellman Fellowship.
|
B
|
$(\varphi,\mu)\in\mathcal{C}(\mathbb{R}_{+},\mathbb{R}^{d})\times\mathcal{C}_{\uparrow}(\mathbb{R}_{+},\mathcal{M}(\mathbb{R}^{l}))$.
|
More precisely, let $\widehat{\mathbb{I}}$ be a large deviations limit rate function or
|
Now, let $\widehat{\mathbb{I}}$ be a large deviations limit rate function or (large deviations) LD limit point of
|
For any such large deviation limit point $\widehat{\mathbb{I}}$, we aim to prove $\widehat{\mathbb{I}}=\mathbb{I}^{*}$
|
Let $\widehat{\mathbb{I}}$ be a large deviations limit point of
|
D
|
In our analysis, these will be included in the $\delta_{R}$ correction in Section 3.2.
|
For the GT one, since the matrix element is proportional to the unknown parameter $r$, which in any case has to be fixed from experiment, the isospin breaking corrections do not play an important role.
|
For the subleading matrix elements, the isospin breaking corrections are not phenomenologically relevant, given the current experimental sensitivity.
|
Given the current precision of the beta decay experiments, isospin breaking effects must be taken into account in the case of the Fermi matrix element.
|
The parameter $r$, which is real by time-reversal invariance, is referred to as the ratio of GT and Fermi matrix elements in the literature. For the neutron decay $r=\sqrt{3}$.
|
A
|
$c^{Z}_{nlm}(\mathbf{r})=\iiint_{\mathcal{R}^{3}}\mathrm{d}V\,g_{n}(r)\,Y_{lm}(\theta,\phi)\,\rho^{Z}(\mathbf{r}).$
|
Here, we incorporate symmetries via the latter approach using Smooth Overlap of Atomic Positions (SOAP) descriptors that are invariant to rotation and translation. These atomic environment descriptors represent the electron density at some point $r$ by the superposition of the Gaussian densities of atoms with the same atomic number $Z$ in the neighborhood of that point
|
In this work, we use the Dscribe library Himanen et al. (2020) to obtain the descriptors. This library implements SOAP descriptors using a partial power spectrum that only includes real spherical harmonics. Because the density depends on the square of the distances between points, it is already invariant to translation. A descriptor vector, $\mathbf{p}$, is formed from elements of the power spectrum
|
The original SOAP descriptors compare the local atomic environments using a kernel that is the dot product of the normalized power spectra between different configurations
|
It is important to note that, when global descriptors are employed, the total energy is no longer the simple sum of local contributions. Now, it explicitly depends on quantities that interrelate features of the whole structure. This overall description of atomic structures implicitly removes the need for descriptors that capture long-range order. Nonetheless, the resolution of the features still needs to be high enough to capture small structural changes, as mentioned earlier. The resolution of the kernel can be improved by weighting each global feature by some characteristic length, $l_{i}$, according to Equation 19. This improves kernel performance by allowing fine-tuning of the parameters, but at the cost of adding more complexity to the model. In the following, we employ this combination of SOAP-averaged descriptors and the RBF kernel on linear hydrogen chains, which serve as an interesting and challenging benchmark.
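For concreteness, a small sketch of the two kernels discussed here, acting on precomputed global descriptor vectors, is given below; the per-feature length scales and test vectors are arbitrary stand-ins, not values used in this work:

import numpy as np

def dot_product_kernel(p: np.ndarray, q: np.ndarray) -> float:
    # Dot product of the normalized power-spectrum vectors.
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def weighted_rbf_kernel(p: np.ndarray, q: np.ndarray, l: np.ndarray) -> float:
    # RBF kernel with a characteristic length scale l_i for every global feature.
    return float(np.exp(-0.5 * np.sum(((p - q) / l) ** 2)))

rng = np.random.default_rng(0)
p, q = rng.random((2, 30))            # stand-ins for two global descriptor vectors
l = np.full(30, 0.5)                  # per-feature length scales (arbitrary)
print(dot_product_kernel(p, q), weighted_rbf_kernel(p, q, l))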
|
B
|
$\mathbf{H}^{\rm BM}(\mathfrak{M}^{\rm Dol}_{r,0}(C),\mathbb{Q}_{\rm vir})$ up to $r=2$, and for all $g\geq 2$, and thus confirmation of Conjecture 5.6 for $r\leq 2$.
|
$\mathbf{U}(\mathfrak{g}^{\rm Dol}_{\bullet}(C))$. The following conjecture states that the images of these inclusions already generate. This is the Dolbeault version of Conjecture 4.7, and is motivated the same way, via Conjecture 3.9. As in the case of Conjecture 4.7, it was proved during revision of this paper in [DHM22].
|
In particular, the supports of the LHS and RHS of (51) are different, and so the morphism (51) is zero. This completes the proof of the $g=1$ version of Conjecture 4.6.
|
Using the analogues of the above results for $g(C)\leq 1$, we deduce Theorem A. We split this up into two cases
|
In the remainder of this section we check appropriate modifications of Conjecture 5.6 for the cases $g(C)\leq 1$.
|
D
|
Random walks are one of the most fundamental dynamical processes, and many studies have used random walks on networks (i.e., graphs) to gain insights into network structure and how such structure affects dynamical processes [25]. Much research has focused on standard random walks, in which the distribution of the occupation probabilities of a network's nodes converges to a stationary distribution with all positive entries in the limit of infinitely many walker steps. It is important to understand the relationship between network structure and different types of random walks. In the present paper, we consider absorbing random walks, in which the probability to reach an “absorbing state” converges to 1 as the number of walker steps becomes infinite. We examine dynamical processes that involve absorbing random walks, for which there is a nonzero rate (the so-called “absorption rate”) of transitioning to an absorbing state from each node of a graph.
|
Absorbing random walks have been used to develop centrality measures [14], other methods to rank the nodes of a network [46], transduction algorithms (which one can use to infer the labels of the nodes of a graph from the labels of a subset of the nodes) [7], and more. For example, Jaydeep et al. [7] proposed a transduction algorithm that uses the number of visits before absorption of an absorbing random walk as a measure of affinity between the nodes of a graph. Absorbing random walks also arise naturally in many modeling contexts, including population dynamics [11], the spread of infectious diseases on networks [21], and the propagation of content in online social networks [3]. In the setting of population dynamics, consider a collection of habitat patches that are connected through some mobility network. In this context, a random walk corresponds to an individual moving between patches and absorption corresponds to death [1, 10, 11].
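As a small illustration of the "number of visits before absorption" quantity mentioned above, the fundamental matrix of an absorbing Markov chain can be computed directly; the toy transition matrix below is arbitrary:

import numpy as np

# Row-stochastic walk over 3 transient nodes; the remaining mass in each row
# is the probability of being absorbed from that node.
P = np.array([[0.0, 0.6, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.5, 0.0]])
N = np.linalg.inv(np.eye(3) - P)     # N[i, j]: expected visits to j before absorption, starting from i
print(N)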
|
We now introduce adaptations of InfoMap that account for the absorption rates of the nodes of a network. Our approach uses absorption-scaled graphs, which arise naturally in the context of absorbing random walks [16].
|
Figure 1: Consider an absorbing random walk on the depicted four-node network, and suppose that the absorption rate of node 2 is much larger than the absorption rates of the other nodes. Detecting communities via modularity maximization or the standard InfoMap algorithm produces a partition of the network into a single community that includes all nodes. However, the flow of an absorbing random walk is trapped in either the set {1} (in dark blue) or the set {3, 4} (in light blue). Consequently, a partition that separates node 1 from nodes 3 and 4 better captures the dynamics of an absorbing random walk than a partition of the network into a single community.
|
Random walks are one of the most fundamental dynamical processes, and many studies have used random walks on networks (i.e., graphs) to gain insights into network structure and how such structure affects dynamical processes [25]. Much research has focused on standard random walks, in which the distribution of the occupation probabilities of a network’s nodes converges to a stationary distribution with all positive entries in the limit of infinitely many walker steps. It is important to understand the relationship between network structure and different types of random walks. In the present paper, we consider absorbing random walks, in which the probability to reach an “absorbing state” converges to 1 as the number of walker steps becomes infinite. We examine dynamical processes that involve absorbing random walks, for which there is a nonzero rate (the so-called “absorption rate”) of transitioning to an absorbing state from each node of a graph.
|
A
|
Among our schemes, we use DP-OPT, DP-Approx and Balanced-Tree (see §IV-B) for the QNR-SP problem, and LP (Appendix A) and ITER schemes for the QNR problem. For ITER, we use three schemes:
|
ITER-DPA, ITER-Bal and ITER-SP, which iterate over DP-Approx, Balanced-Tree and SP respectively. To be comprehensive,
|
Among our schemes, we use DP-OPT, DP-Approx and Balanced-Tree (see §IV-B) for the QNR-SP problem, and LP (Appendix A) and ITER schemes for the QNR problem. For ITER, we use three schemes:
|
We observe that the performance gap between our proposed techniques and ITER-SP is higher than in the QNR-SP case, as SP picks paths based on just number of
|
the following schemes: ITER-DPA, ITER-Bal, ITER-SP, Delft-LP, and Q-Cast with the optimal LP as the benchmark for comparison (LP wasn’t feasible to run
|
A
|
Note that each of these conditions jointly concerns the system energy, the time evolution law, the initial condition, and the choice of observables.
|
If we have enough information about the system to ensure that the system satisfies one of these sufficient conditions for the realizability condition, we can obtain the exact value of the true EP from Eq. (11b), and we can obtain additional information by the methods described below in Secs. V.2–V.4.
|
In this paper, we propose a method of thermodynamic inference for relaxation processes that uses measurements in tilted equilibrium, i.e., the equilibrium under the application of external fields to the system. Our approach combines the nonstationary measurement of a few observables with the tilted equilibrium measurement of the same set of observables. From these data, our method allows us to compute the exact value of the minimum EP compatible with the nonstationary data, which constitutes a tight lower bound on the true EP over the relaxation from any intermediate distribution to the final equilibrium. Moreover, if the system satisfies a condition called realizability condition, which says that the nonstationary distribution is exactly realized as a tilted equilibrium, our method provides us with additional information about the process: the exact value of the true EP, the instantaneous EP rate, the nonstationary thermodynamic forces, and a constraint on relaxation trajectories.
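The following sketch illustrates, under assumptions not taken from the paper, how such a bound can be computed numerically for a discrete state space: among all distributions reproducing the observed expectation values, the one closest (in KL divergence) to the equilibrium distribution is an exponentially tilted equilibrium, and that minimal KL divergence serves as the candidate lower bound on the EP (in units of k_B). The equilibrium distribution, the observable, and the target expectation value below are made up for illustration, and the identification with the paper's minimum EP is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative discrete system: equilibrium distribution and one observable.
p_eq = np.array([0.5, 0.3, 0.2])   # made-up equilibrium distribution
f = np.array([0.0, 1.0, 2.0])      # made-up observable values per state
eta = 1.2                          # observed expectation value of f

def tilted(theta):
    """Tilted equilibrium p_theta(x) proportional to p_eq(x) * exp(theta * f(x))."""
    w = p_eq * np.exp(theta * f)
    return w / w.sum()

# Solve <f>_{p_theta} = eta for the tilting field theta.
theta_star = brentq(lambda th: tilted(th) @ f - eta, -50.0, 50.0)
p_star = tilted(theta_star)

# Minimal KL divergence to equilibrium among distributions with <f> = eta;
# this is the candidate lower bound on the entropy production (units of k_B).
min_ep = np.sum(p_star * np.log(p_star / p_eq))
print(theta_star, min_ep)
```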
|
As discussed in Sec. III.3, the realizability condition [Eq. (12)] ensures that the inequality in Eq. (11a) holds with equality, allowing the inference of the exact value of EP. We discuss three situations where the realizability condition is satisfied in Sec. V.1. Moreover, assuming the realizability condition, we can extract additional information about the relaxation process from the tilted equilibrium measurements, including the EP rate, the thermodynamic force, and a constraint on relaxation paths. We present these additional inference methods in Secs. V.2–V.4.
|
In this paper, we have developed a method of thermodynamic inference that uses tilted equilibrium measurements. The method enables us to obtain the exact value of the minimum EP $\Delta\mathcal{H}_{\mathrm{m}}(\bm{\eta})$ compatible with the observed set of expectation values $\bm{\eta}$. This method applies to any classical stochastic system that relaxes to equilibrium with any choice of observables. Furthermore, if we have enough information about the system to ensure that the realizability condition holds, or at least that the realizability condition is approximately satisfied, we can extract the true EP, the EP rate with its decomposition, the thermodynamic force, and a constraint on relaxation paths.
|
A
|
Example file with an instance of Ising spin-glass. The Hamiltonian of the presented problem reads $H(s_{0},s_{1},s_{2}) = -1\,s_{0}s_{1} - 3\,s_{0}s_{2} + 1.5\,s_{1}s_{2}$.
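As a quick sanity check of the quoted Hamiltonian, the short brute-force sketch below enumerates all 2³ spin configurations and reports a minimum-energy one; it is a generic enumeration for illustration, not the solver shipped with the software described here.

```python
from itertools import product

# Couplings of the example instance: H = -1*s0*s1 - 3*s0*s2 + 1.5*s1*s2
J = {(0, 1): -1.0, (0, 2): -3.0, (1, 2): 1.5}

def energy(spins):
    return sum(coupling * spins[i] * spins[j] for (i, j), coupling in J.items())

# Enumerate all configurations of three +/-1 spins and pick a ground state.
best = min(product([-1, 1], repeat=3), key=energy)
print(best, energy(best))   # e.g. (-1, 1, -1) or (1, -1, 1), energy -3.5
```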
|
the current directory. The input file comprises rows of the form “i j J_ij”. Here, i and j are indices of variables and J_ij is the
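A minimal way to read such a coupling file could look like the following; the file name and the in-memory representation (a dict keyed by variable pairs) are illustrative choices, not necessarily the format used internally by the software.

```python
def read_couplings(path):
    """Parse rows of the form 'i j J_ij' into a {(i, j): J_ij} dictionary."""
    couplings = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            i, j, value = line.split()
            couplings[(int(i), int(j))] = float(value)
    return couplings

# Example: the instance above, stored in a hypothetical file "instance.txt":
#   0 1 -1
#   0 2 -3
#   1 2 1.5
# read_couplings("instance.txt") -> {(0, 1): -1.0, (0, 2): -3.0, (1, 2): 1.5}
```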
|
(required by the base class’ sample method) and an additional keyword parameter num_solutions indicating how many solutions should be returned.
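A plugin along these lines could be sketched as below, assuming a dimod-style Sampler base class; the class name, the brute-force strategy, and the spin-valued variables are assumptions made for illustration and do not reproduce the project's actual plugin interface.

```python
import itertools
import dimod

class BruteForceSampler(dimod.Sampler):
    """Toy solver: exhaustively enumerates spin assignments and keeps the best."""

    @property
    def parameters(self):
        return {"num_solutions": []}

    @property
    def properties(self):
        return {}

    def sample(self, bqm, num_solutions=1, **kwargs):
        variables = list(bqm.variables)
        # Score every assignment of {-1, +1}^n (only sensible for small n,
        # and assumes a SPIN-valued binary quadratic model).
        assignments = [
            dict(zip(variables, values))
            for values in itertools.product([-1, 1], repeat=len(variables))
        ]
        assignments.sort(key=bqm.energy)
        return dimod.SampleSet.from_samples_bqm(assignments[:num_solutions], bqm)
```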
|
plugins are responsible for implementing algorithms solving instances of Ising spin–glass or QUBO models (collectively known as Binary Quadratic
|
This work is supported by the project “Near-term quantum computers: Challenges, optimal implementations and applications” under Grant Number POIR.04.04.00–00–17C1/18–00, which is carried out within the Team-Net programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. We gratefully acknowledge Poland’s high-performance computing infrastructure PLGrid (HPC Centers: Cyfronet Athena) for providing computer facilities and support within computational grant no. PLG/2022/015734.
|
D
|
Coronaviruses constitute an extensive family of viruses typically responsible for causing mild to moderate upper-respiratory tract illnesses. Various coronaviruses circulate among animals, including pigs, cats, and bats. On occasion, these viruses can jump from animals to humans, leading to infections. Some of these infections have resulted in severe outbreaks, such as the SARS coronavirus (SARS-CoV). In December 2019, a novel coronavirus, COVID-19, emerged at a seafood market in Wuhan, China. By March 2020, the World Health Organization (WHO) declared the virus to be a global pandemic. COVID-19 primarily spreads through respiratory droplets or nasal discharge when an infected individual coughs or sneezes. Mathematical models play a crucial role in understanding and describing infectious diseases, both in theory and practical applications (see, for example, [6–8, 18, 34]). Developing and analyzing such models contribute significantly to unraveling transmission mechanisms and disease characteristics. Consequently, these insights enable the formulation of effective strategies for prediction, prevention, and control of infections, ensuring the well-being of populations.
|
Numerical solutions of systems are invaluable in the study of epidemic models. This section presents the numerical results of our model, shedding light on how the parameters of the deterministic model (2) and the intensity of non-Gaussian noise in the stochastic model (4) impact the dynamics. We conduct numerical experiments to illustrate the extinction and persistence of the novel coronavirus, COVID-19, in both the deterministic model and its corresponding stochastic system for comparison.
|
Environmental fluctuations have emerged as a significant factor in the study of diseases, particularly in the context of the coronavirus. Consequently, it becomes crucial to investigate the impact of random disturbances on epidemic models. Disease spread is inherently stochastic, and the introduction of stochastic noise can notably influence the likelihood of disease extinction during the early stages of an outbreak. While ordinary differential equation (ODE) models provide specific sample solutions, employing a stochastic differential equation (SDE) model allows for the exploration of the stochastic distribution of disease dynamics.
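To illustrate the difference in practice, the sketch below integrates a generic stochastic SIR-type model with an Euler-Maruyama scheme, adding a Gaussian diffusion term and a simple compound-Poisson jump term as a stand-in for non-Gaussian noise. The model, parameter values, and noise structure are illustrative only; they are not the deterministic model (2) or the stochastic model (4) analyzed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not those of the paper's model).
beta, gamma = 0.4, 0.1            # transmission and recovery rates
sigma = 0.05                      # Gaussian noise intensity on transmission
jump_rate, jump_scale = 0.5, 0.1  # crude compound-Poisson ("non-Gaussian") jumps
dt, T = 0.01, 160.0

def simulate():
    S, I = 0.99, 0.01             # susceptible / infected fractions
    path = []
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        # Poisson number of jumps in dt, each with a random signed size.
        dJ = rng.normal(0.0, jump_scale) * rng.poisson(jump_rate * dt)
        infection = beta * S * I * dt + sigma * S * I * dW + S * I * dJ
        S = max(S - infection, 0.0)
        I = max(I + infection - gamma * I * dt, 0.0)
        path.append(I)
    return np.array(path)

# Repeating simulate() gives different sample paths; their spread is the
# "stochastic distribution of disease dynamics" that a single ODE solution hides.
paths = np.stack([simulate() for _ in range(20)])
print(paths[:, -1])               # distribution of the final infected fraction
```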
|
To date, a multitude of mathematical models describing infectious diseases through differential equations have been formulated and scrutinized to understand the dynamics of infection spread, exemplified by research on [1, 2, 3, 4]. Recently, the mathematical modeling of the COVID-19 pandemic has captivated the attention of numerous experts, including mathematicians, scientists, epidemiologists, pharmacists, and chemists. The outcomes of these endeavors have yielded several noteworthy and crucial results, as highlighted in the works of [4, 5, 6, 7, 8].
|
In this study, we explore a nonlinear stochastic COVID-19 system, incorporating the influence of non-Gaussian noise. The presence of non-Gaussian noise adds a layer of complexity to the modeling framework, allowing for a more realistic representation of the uncertainties and random fluctuations inherent in the dynamics of the COVID-19 epidemic. This consideration is crucial for a comprehensive understanding of the system’s behavior and its response to unpredictable environmental factors.
|
C
|