diff --git "a/PMC_clustering_824.jsonl" "b/PMC_clustering_824.jsonl" new file mode 100644--- /dev/null +++ "b/PMC_clustering_824.jsonl" @@ -0,0 +1,699 @@ +{"text": "This paper investigates the problem of synchronization of fractional-order complex-variable chaotic systems (FOCCS) with unknown complex parameters. Based on the complex-variable inequality and stability theory for fractional-order complex-valued system, a new scheme is presented for adaptive synchronization of FOCCS with unknown complex parameters. The proposed scheme not only provides a new method to analyze fractional-order complex-valued system but also significantly reduces the complexity of computation and analysis. Theoretical proof and simulation results substantiate the effectiveness of the presented synchronization scheme. In the past 20 years, fractional-order chaotic systems have been extensively studied due to their wide applications in the fields of secure communication, control engineering, finance, physical and mathematical science, entropy, encryption and signal processing ,2,3,4. MThe aforementioned works mainly investigated the fractional-order systems with real variables, not involving complex variables. Because complex variables that double the number of variables can generate complicated dynamical behaviors, enhance anti-attack ability and achieve higher transmission efficiency ,19,20, mInspired by the above discussions, the synchronization problem of FOCCS with unknown complex parameters was investigated in this paper. Using the inequality of the fractional derivative containing complex variable and the stability theory for fractional-order complex-valued system, we realized synchronization of such systems by constructing a suitable response system. It should be noted that we deal with the synchronization problem of fractional-order uncertain complex-variable system in complex-valued domain. That is to say, it is not necessary to separate the complex-variable system into its real and imaginary parts. This greatly reduces the complexity of computation and the difficulty of theoretical analysis.n-dimensional space. For l2-norm of Notation: Definition 1\u00a0.The fractional integral of order \u03b1 for a function f is defined as:0 and \u03b1 > 0.where t \u2265 t.The fraDefinition 2\u00a0.Caputo\u2019s fractional derivative of orderfor a functionis defined by:0 and n is a positive integer such that n \u2212 < \u03b1 < 1where t \u2265 t..Caputo\u2019Lemma 1\u00a0.Let0 andbe a continuous and derivable vector function. Then, for any time instant t \u2265 t.Letx and letandare continuously differentiable functions. If:and:whereis a positive constant. Then z(t) = 0 is asymptotically stable.Proof.\u00a0See the Remark\u00a01.Using Lemmas 2.2\u20132.3, one can directly analyze fractional order complex-variable system in the complex space.We considered a kind of FOCCS described by:Proposition\u00a01.1 such that the following inequality holds:There exists a positive constant lProof.\u00a0Given that ectively ,37.Let Proposition 2\u00a0. For the Lipschitz continuous function l2 such that the following inequality holds:\u00a0. 
For thProof.\u00a0For Remark\u00a02.It is easy to check that many typical FOCCSs, such as the fractional-order complex-variable Chen system , T systeChoose system (11) as the master system, then the controlled response system is given by:Theorem\u00a01.Asymptotically synchronization and parameter identification of systems (13) and (11) can be achieved under adaptive controller:and the complex update laws:whereis the error vector,is the parameter error, \u03c3, \u03b7 are two arbitrary positive constants.Proof.\u00a0From the error vector and systems (11) and (13), it yields:Using Lemma 2.1, Corollary 2.1 and Lemma 2.2, we have:Substitute Equations (15) and (16) into the inequality above, we further have:According to Lemma 2.3, one has rem 1 in , parametRemark\u00a03.In previous work ,27,28,29Remark\u00a04.If the system parameters are known, the update law will be reduced to (15) only.In this section, in order to show the effectiveness of the proposed scheme in preceding section, numerical example on fractional-order complex chaotic system will be provided. When numerically solving such systems, we first adopt the predictor\u2013corrector method by MATLAa, b, c are system parameters; let Consider the Lorenz-like fractional-order complex chaotic system with commensurate order:Recently, ref. describeia (the imaginary of a) is depicted in Field-programmable gate array (FPGA)-based implementation of chaotic oscillators has demonstrated its usefulness in the development of engineering applications in a wide variety of fields, such as: random number generators, robotics and chaotic secure communication systems, signal processing. Very recently, Pano-Azucena et al. implemena, b and c are unknown, the response system is given as follows:Taking the system (19) as master system, and assuming the parameters According to Theorem 1, the controllers and the update rules are selected as:a, b, c) = , the initial conditions In the simulation, let We studied the adaptive synchronization of FOCCS with unknown complex parameters, and proposed a method for analyzing FOCCS without separating system into real and imaginary parts. By this method, the constructed response system can be asymptotically synchronized to an uncertain drive system with a desired complex scaling diagonal matrix. The proposed synchronization scheme retains the complex nature of fractional-order complex chaotic system. It not only provides a new method of analyzing FOCCS, but also significantly decreases the complexity of computation and analysis. We hope that the work performed will be helpful to further research of nonlinear fractional order complex-variable systems."} +{"text": "For any k-CNF to d-regular -CNF is given. Here regular -CNF is a subclass of CNF, where each clause of the formula has exactly k distinct variables, and each variable occurs in exactly s clauses. A d-regular -CNF formula is a regular -CNF formula, in which the absolute value of the difference between positive and negative occurrences of every variable is at most a nonnegative integer d. We prove that for all s, such that every d-regular -CNF formula is satisfiable. In this study, s such that there exists a uniquely satisfiable d-regular -CNF formula. We further show that for d-regular Unique The r k\u22653 in . That isr k\u22653 in , Glucoser k\u22653 in . The conk-SAT denotes the promise search problem of k-SAT where the number of solutions is either 0 or 1. The harder instances should have fewer solutions. 
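To make the notion of solution counts concrete, a brute-force counter is sketched below. It is illustrative only — exhaustive enumeration is exponential and is not the algorithmic content of the results cited here — and the example formula and function names are ours, not from the paper.

```python
from itertools import product

def count_solutions(clauses, num_vars):
    """Count satisfying assignments of a CNF formula by exhaustive search.
    Each clause is a tuple of nonzero ints in the DIMACS convention:
    literal v is the variable x_v, literal -v is its negation."""
    count = 0
    for bits in product((False, True), repeat=num_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            count += 1
    return count

# Every 3-clause over {x1, x2, x3} excludes exactly one of the 8 assignments,
# so these 7 clauses leave a single solution (x1 = x2 = x3 = False):
f = [(-1, -2, -3), (-1, -2, 3), (-1, 2, -3), (-1, 2, 3),
     (1, -2, -3), (1, -2, 3), (1, 2, -3)]
print(count_solutions(f, 3))  # 1, i.e. a uniquely satisfiable 3-CNF
```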
But Calabro and Paturi in [k-CNF formula has a solution is the same as that of deciding whether it has exactly one solution, both when it is promised and when it is not promised that the input formula has a solution. Thus, the research of uniquely satisfiable SAT instances is a very significant work.A natural measure of the solution space is the number of solutions. Unique aturi in proved tk distinct variables per clause and at most s occurrences of each variable. Regular -SAT problem in [k-SAT problem showed that the constrained density k-SAT problem such that(i)k-CNF instances with all random (ii)k-CNF instances with all random The of each variable have its own characteristics.In ,14, M. Wuses. In , Johannsd-regular , if F is unsatisfiable and F, a minimal unsatisfiable formula can be obtained by removing some clauses from F.If the formulas Definition\u00a01.For each Definition\u00a02.A k-CNF formula F is called a k-forced-once d-regular ((i)\u00a0there exist k variables (ii)\u00a0except for the k variables, every variable occurs in exactly s clauses, and the absolute value of the difference between positive and negative occurrences of every variable is no more than the nonnegative integer d.(iii)\u00a0F is satisfiable and for any truth assignment \u03c4 satisfying F, it holds thatWe can represent a CNF formula as a matrix. Each variable F is a CNF formula with 15 variables F isLet F is a 3-forced 0-regular -CNF formula. Each of the three variables F and is forced to be Clearly, Definition\u00a03.In the context of SAT, a reduction M is identified to be parsimonious if x and Lemma\u00a01.Let , and each new variable occurs F and the absolute value of the difference between positive and negative occurrences of each variable is no more than d. Therefor, F is turned into an unsatisfiable d-regular Z occurs positively in Every variable of (ii)All literals of (iii)Z.Every clause of Introduce a new boolean variable set Define Z does not occur in Z be x and all literals of x, every variable of s clause, and meet the d-regularity and (ii) of Definition 2 hold in d-regular . Therefore, Next, we will assess the feasibility of the construction of Lemma\u00a04.For Proof.\u00a0k-forced-once d-regular X and Y occurs in exactly every variable of (ii)Every clause of We construct a Define s clauses, and the absolute value of the difference between positive and negative occurrences of every variable is at most d. The number of unforced variables in n. So d-regular Let (ii)LetFor simplicity, let Define Step 3 We will make up the gap of the number of occurrences of every variable. Using the variables in sets X and (i)For (ii)For (iii)x in Each variable (iv)x in Each variable (v)Every clause of Step 4 Let d-regular -CNF formula is unsatisfiable, then d-regular -CNF formula is unsatisfiable, then d-regular (By 7)\u226417 in , if a -CNF formula is unsatisfiable, then d-regular (By 8)\u226429 in , all +2 in , we obtaCorollary\u00a01.For \u03a8 that has exactly one satisfying assignment.Proof.\u00a0The statement follows directly from Lemmas 5 and 6.\u2003\u25a1k-CNF formula into a d-regular . The variables in Z are sorted by their subscripts.Step 4 We construct a k-CNF formula X and Z, satisfying the following conditions.(i)Z occurs in exactly Every variable Otherwise(ii)X occurs in exactly For (iii)X occurs in exactly For (iv)X.Every clause of Step 5 We construct the formula s clauses, and the absolute value of the difference between positive and negative occurrences of every variable of d. 
Therefore, d-regular (Obviously, every variable of X in First, we focus on the feasibility of X and Z. The variable set X generates Z generates The variables of For X in For For X is more than the number of clauses in Obviously, the number of positive literals of Second, we will prove that the formula It is assumed that X. As a result, Obvious, the truth assignment It is assumed that We substitute Equation into \u03a83,Obviously, the truth assignment Therefore, X are forced to be Z that replaced the same variable of Z. Due to only one solution of Finally, we will explain why the polynomial-time reduction is parsimonious. If k-SAT problem is a NP-complete problem. From Theorem 5, it demonstrates that there exists a polynomial time reduction from k-SAT to d-regular (d-regular (For d-regular (F in which the positive and negative occurrences number of every variable do not exceed F is a (F must be satisfiable for k, can be satisfiable for From Lemma 6, this suggests that for d-regular (d-regular (We present the construction method of a uniquely satisfiable"} +{"text": "Correction to: Trialshttps://doi.org/10.1186/s13063-019-3351-2After publication of our article we have n\u00a0=\u20091013 per group), instead of 6000. Moreover, degrees of freedom should equal to 1, since there are two study groups.According to the calculation described in the article, sample size should be 2026 in total would be required for our study. If the attrition rate was set at 10%, a total of 2026 patients (1013 in each group) would be required.Figure"} +{"text": "A symmetric block cipher employing a substitution\u2013permutation duo is an effective technique for the provision of information security. For substitution, modern block ciphers use one or more substitution boxes (S-Boxes). Certain criteria and design principles are fulfilled and followed for the construction of a good S-Box. In this paper, an innovative technique to construct substitution-boxes using our cubic fractional transformation (CFT) is presented. The cryptographic strength of the proposed S-box is critically evaluated against the state of the art performance criteria of strong S-boxes, including bijection, nonlinearity, bit independence criterion, strict avalanche effect, and linear and differential approximation probabilities. The performance results of the proposed S-Box are compared with recently investigated S-Boxes to prove its cryptographic strength. The simulation and comparison analyses validate that the proposed S-Box construction method has adequate efficacy to generate efficient candidate S-Boxes for usage in block ciphers. Cryptography helps individuals and organizations to protect their data. For this purpose, different symmetric and asymmetric ciphers have been designed. Symmetric ciphers possess simplicity and efficiency, and consume fewer computational resources as compared to asymmetric ciphers. Symmetric ciphers have two major categories as stream and block ciphers . A streaA block cipher is particularly useful to achieve data confidentiality, which is one of the cryptography goals. It is considered as one of the most widely used tools for the provision of data security . The mosAn S-Box is a crucial part of modern-day block ciphers and is used to create a muddled ciphertext from the given plaintext. An S-Box is one of the fundamental techniques used to provide candid confusion. Confusion is the complex relationship that must be established between the plaintext and the ciphertext . 
The strIn , the autGenerally, a block cipher consists of many parts. An S-Box, being the lone non-linear part of a block cipher, is very useful for enhancing the security of the plaintext by creating confusion in the ciphertext. The non-linearity provided by an S-Box offers defense against linear cryptanalysis . Block cAES is one of the popular block ciphers which use S-Boxes in the encryption and decryption processes. Sahmoud et al. proposedThe Feistel structure has been used as the main construct in many of the symmetric ciphers, like DES, GOST, RC5, etc. DES and GOST each uses eight S-Boxes with sizes 6 \u00d7 4 and 4 \u00d7 4, respectively. The authors in used theOne of the desirable properties of modern block ciphers is the avalanche effect . This prChaotic cryptography is among the most interesting areas in the field of information security in the recent era as the chaotic systems possess the property of randomness . Many reAnother popular area in cryptography is DNA computing, which is being considered as a possible solution to the design of resilient ciphers. Kadhim et al. and Al-WCiphers using S-Boxes highly depend on the security of the S-Boxes. Thus, the identification of a tool to evaluate and find an S-Box with high security that can also assist in the design of efficient S-Boxes is considered critical. Wang et al. developeLinear fractional transformation (LFT) is another area which helps in the generation of better S-Boxes. Farwa et al. proposedThe techniques and methods for the generation of S-Boxes presented in the literature are either suitable for the creation of static S-boxes or are very complicated and time-consuming. Static S-Boxes have their own limitations and weaknesses. These S-Boxes may help attackers in the cryptanalysis of the captured ciphertext and hence they may reach the original plaintext. On the other hand, the methods presented in the literature that generate dynamic and key-dependent S-boxes are very complex and less efficient. Thus, the need for a simple and efficient method to generate dynamic S-Boxes exists.An S-box that helps in the security enhancement of the block cipher and resists cryptanalysis;An S-Box that is simple to construct;An S-Box that is generated dynamically using sub-keys;An S-Box that fulfills the most needed S-Box criteria, like NL, SAC, BIC, LP, DP, etc.In this paper, a novel design method for the construction of efficient S-Boxes for block ciphers is proposed. The following considerations were kept in mind while designing the proposed S-Box:The method proposed in this paper for the construction of an S-Box is an innovative one and is quite different from the approaches presented in the literature. A cubic fractional transformation is proposed for the construction of strong S-Boxes. After the S-Box was designed, a performance analysis was performed to show its strength. The proposed S-Box demonstrated a very good cryptographic strength when compared with other recently designed S-Boxes. The results indicated that the proposed S-Box is a good choice for block ciphers.The structure of the rest of the paper is as follows. Modern block ciphers employ byte substitution to replace a complete byte (one element) of a matrix with another complete byte using the substitution box (S-Box). Generally, the design of an S-Box involves nonlinear mapping, which results in bijection. Many researchers have designed S-Boxes that are cryptographically strong using such mappings. 
One such mapping is linear fractional transformation (LFT), which was exhaustively explored for the construction of the S-boxes ,50,51,52Z = {0, 1, \u2026\u2026, 2n \u2212 1}, both \u03b1 and \u03b2 are not 0 at the same time, and \u03b1(z)3 + \u03b2 \u2260 0 is used to construct the n \u00d7 n S-box. The nonlinear nature of CFT stimulates its usage in byte substitution. The procedure to generate the proposed S-Box for n = 8 is illustrated in In this paper, we extend the idea of LFT and construct our new transformation to generate an S-Box using another nonlinear mapping method in a simple and efficient way. We call this extended transformation cubic fractional transformation (CFT). A cubic fractional transformation is a function of the form:Z = {0, 1, \u2026, 2n \u2212 1} = {0, 1, \u2026, 28 \u2212 1} = {0, 1, 2, \u2026, 254, 255} for n = 8. Any values can be chosen for \u03b1 and \u03b2 that gratify the condition of \u03b1(z)3 + \u03b2 \u2260 0. For the sake of the calculations here, we have chosen \u03b1 = 95 and \u03b2 = 15. The CFT function, C(z), given in Equation (2) generates values of Z \u2013 {0, 106} when z \u2208 Z \u2013 {176, 184}. When z = 176, C(z) evaluates to 256 \u2209 Z. When z = 184, the denominator of Equation (2) evaluates to 0. To keep the function, C(z), bijective, we explicitly define C(z) for z \u2208 {176, 184} as conditioned in Equation (2). An example S-Box of a size of 8 \u00d7 8 is generated using a CFT function, c: To elaborate the construction of the proposed S-Box using Equation (1), let us have a specific type of cubic fractional transformation as given in Equation (2). Let This particular cubic fractional transformation of Equation (2) generates the elements of our proposed S-Box, which are organized in a 16 \u00d7 16 matrix as shown in \u03b1 and \u03b2 can be used in Equation (1) to generate an S-Box. One can choose sub-keys as the values for \u03b1 and \u03b2 to generate dynamic and key-dependent S-boxes.As mentioned above, any values for In this section, we analyze our method and S-Box given in f: yY\u2208, \u2203 a unique x \u2208 X, such that f(x) = y. For n-bit inputs, this property maps all possible 2n input values to distinct output values. In other words, when x1 \u2260 x2, then f(x1) \u2260 f(x2). All component Boolean functions (f1 to f8) of the proposed S-box are balanced (number of 1\u2019s = number of 0\u2019s). Further, all 28 output values of the S-box are distinctive where each output value \u2208 Z = {0, 1, \u2026., 255}.A function, n-bit Boolean function, f, is calculated as [fW(z) = Walsh spectrum of the coordinate Boolean function, f, which is measured as:t.z is the dot product of t and z in bit-by-bit fashion and z \u2208 {0, 1}n. The nonlinearity values, NL(f), of the Boolean functions of our S-Box are given in An S-Box operation should not be a linear mapping of an input to an output as it weakens the strength of any cipher. A high value of non-linearity provides resistance against linear cryptanalysis. The nonlinearity of an lated as ,56:(3)NLWebster et al. introducx, is inverted; this changes the output bits, y and z, independently. For greater security, efforts are made to decrease the dependence between output bits. If a given S-Box satisfies the BIC, all the component Boolean functions possess high nonlinearity and meet the SAC [The authors in introduc the SAC . Table 5Considering the nonlinearities and SAC, the average BIC values are 103.5 and 0.5, respectively. 
If a given S-Box is non-linear and demonstrates the SAC, it fulfills BIC . These vModern block ciphers are designed to create as much diffusion and confusion of the bits as possible for the security of data and provide a shield against different approaches that cryptanalysts adopt to obtain the plaintext. Mostly, this is achieved by S-Boxes, which provide nonlinear transformations. If an S-Box is designed with a low linear probability (LP), it is a very good cryptographic tool against linear cryptanalysis.xA and xB represent the input and output masks, respectively and Z = {0, 1, \u2026., 255}. The linear probability of an S-Box is calculated using the following equation :(4)LP=maThe maximum value of LP of our S-box is only 0.156, and thus our S-Box provides good resistance against linear cryptanalysis.x and \u0394y are the input and output differentials, respectively. An S-box with smaller differentials is better at repelling differential cryptanalysis. Differential cryptanalysis is one of the most commonly used methods to reach the plaintext. Here, the differences in the original message (plaintext) and the differences in the ciphertext are obtained. The pairing of these differences may help reach some of the key values. To defy differential cryptanalysis, a small value of differential uniformity (DU) for a given S-Box is required. Differential uniformity is calculated as :(5)DU=maA high value of non-linearity provides resistance against linear cryptanalysis . The aveAn SAC value near 0.5 is the ultimate goal of every S-Box designer. Similarly, the BIC value of our S-box is better than the BIC values of more than half of the other S-boxes. Any S-Box with a lesser value of differential probability is more resilient against differential cryptanalysis. The DP value of our S-Box is 0.039, which is better than the DP values of nine other S-Boxes and equal to the DP values of two other S-Boxes as shown in To defy linear cryptanalysis, a smaller value of LP for a given S-Box is desired by S-Box designers. The LP value of our S-Box is 0.156. Due to this small value, we can say that our S-box is resistant to linear cryptanalysis.From the above comparison it is evident that our S-Box fulfills the most needed S-Box criteria and benchmarks, like SAC, BIC, NL, LP, DP, etc., and hence possesses better cryptographic strength.In this paper, we proposed a new transformation and suggested a novel method to construct efficient S-Boxes using cubic fractional transformation. The security strength of the proposed S-Box was studied using different standard criteria. The simulation results were in accordance with other relevant S-Boxes, rationalizing the performance of our S-Box method. The performance of our S-Box was good in most of the cases when compared with other recent S-Boxes. In particular, the scores of the SAC, BIC, nonlinearity, LP, and DP of the proposed S-Box provide evidence for it as a new alternative in the S-Box design domain. The promising results of the proposed S-Box analysis make it a potential candidate for usage in modern-day block ciphers. It is worth mentioning that our method is the first to explore the cubic fractional transformation for S-Box construction. 
Stronger S-boxes using cubic fractional transformation, like the proposed S-Box, are expected to emerge for usage in practical systems for secure communication."} +{"text": "Since reachability is already known to be undecidable in the fragment of PS 2.0 with only release-acquire accesses (PS 2.0-ra), we consider the fragment with only relaxed accesses and promises (PS 2.0-rlx). We show that reachability under PS 2.0-rlx is undecidable in general and that it becomes decidable, albeit non-primitive recursive, if we bound the number of promises.We consider the reachability problem for finite-state multi-threaded programs under the view-switches\u201d, i.e., the number of times the processes may switch their local views of the global memory. We provide a code-to-code translation from an input program under PS 2.0 to a program under SC, thereby reducing the bounded reachability problem under PS 2.0 to the bounded context-switching problem under SC. We have implemented a tool and tested it on a set of benchmarks, demonstrating that typical bugs in programs can be found with a small bound.Given these results, we consider a bounded version of the reachability problem. To this end, we bound both the number of promises and of \u201c"} +{"text": "This paper investigates the problem of complex modified projective synchronization (CMPS) of fractional-order complex-variable chaotic systems (FOCCS) with unknown complex parameters. By a complex-variable inequality and a stability theory for fractional-order nonlinear systems, a new scheme is presented for constructing CMPS of FOCCS with unknown complex parameters. The proposed scheme not only provides a new method to analyze fractional-order complex-valued systems but also significantly reduces the complexity of computation and analysis. Theoretical proof and simulation results substantiate the effectiveness of the presented synchronization scheme. In the past twenty years, the application of fractional calculus has become a focus of attention, since fractional derivatives can more accurately describe the actual physical model. So, it has become an efficient and an excellent tool in physics, mathematical science, chemistry, control engineering, finance, signal processing and other fields ,3,4,5,6.The above-mentioned works mainly investigated the fractional-order systems with real variables, not involving complex variables. It is well known that complex variables, which double the number of variables, can generate complicated dynamical behaviors, enhance anti-attack ability and achieve higher transmission efficiency ,24,25. TInspired by the above discussions, the CMPS problem of FOCCS with unknown complex parameters is investigated in this paper. First, we present a stability theory for fractional-order uncertain nonlinear systems. Then, using this theory, the inequality proposed by Xu et al. and compn-dimensional space. l2-norm of Notation: Definition\u00a01The fractional integral of order \u03b1 for a function f is defined as0 and where t . The fraDefinition\u00a02Caputo\u2019s fractional derivative of order 0 and n is a positive integer such that where t . Caputo\u2019Lemma\u00a01.0 and Let Let x(t)Corollary\u00a01.For a scalar derivable function Lemma\u00a02.Let where Let z\u2009\u2208\u2102Lemma\u00a03. Let andwhere Proof.\u00a0By Next, we adopt contradiction to prove rem 1 in .k = 1, 2, 3, \u2026, from (7), we have:d is bounded). 
Obviously, Suppose that Remark\u00a01.1(t) and V2(t).Lemma 3 provides a stability criterion for the fractional-order nonlinear uncertain system by choosing a Lyapunov function that includes two parts VRemark\u00a02.Lemma 3 provides a Lyapunov-based adaptive control method for stability analysis and synchronization of fractional-order systems (FOS).Remark\u00a03.Lemma 3 is suitable for verifying the stability and stabilization controller design of FOS with unknown parameters and external disturbances.Remark\u00a04.If the FOS has no uncertainties, then Lemma 3 is still valid.Lemma\u00a04.For the fractional-order complex-variable systemswhere andwhere For the Lemma\u00a05.1 such that the following inequality holds:For all where ,46 For aLemma\u00a06.2 such that the following inequality holds:For the Lipschitz continuous function ,46 For tWe consider a kind of fractional-order complex-variable chaotic drive and response systemsDefinition\u00a03.For given drive system (8) and response system (9), it is said to achieve CMPS ifwhere For giveRemark\u00a05.Obviously, some known synchronization ways are the special cases of CMPS, such as CS, AS, PS and MPS.We consider a kind of FOCCS with unknown complex parameters asSystem (13) was chosen as the master system. In this case, we constructed the slave system as follows:Remark 2. From (13), it follows that Theorem\u00a01.Asymptotically CMPS of systems (13) and (14) can be achieved under adaptive controllerand the complex update laws:where Proof.\u00a0From the error vector and systems (13) and (14), it follows:Using Lemma 1, Corollary 1 and Lemma 2, we have:L1, l2 and L2 are three positive constants. Then, one has:Substituting Equations (16) and(17) into the inequality above, we further have:By en, from , V1(t)=eTherefore, the systems (13) and (14) can reach asymptotically CMPS under the adaptive control strategy (15\u201317). \u2610Remark\u00a06.Unlike previous works, in our proposed method, the entire analysis process is performed in the complex-valued domain, and the complex function theory is used to derive synchronization conditions without separating the original complex-valued chaotic system into two real-valued systems, which reduces the complexity of analysis and computation.Remark\u00a07.If the system parameters are known, the update law will be reduced to (16) only.In this section, in order to show the effectiveness of the proposed scheme in preceding section, numerical example on fractional-order complex chaotic system will be provided. When numerically solving such systems, we adopted the Gr\u00fcnwald\u2013Letnikov (G-L) method using MAa, b, c are system parameters, let ia (the imaginary of a) is depicted in Consider the fractional-order Chen complex chaotic system with commensurate order:Assuming the parameters The error vector a, b, c) = , the initial conditions In the simulation, let We study the CMPS of FOCCS with unknown complex parameters, and propose a method for analyzing FOCCS without separating systems into real and imaginary parts. By this method, the constructed response system can be asymptotically synchronized to an uncertain drive system with a desired complex scaling diagonal matrix. The proposed synchronization scheme retains the complex nature of fractional-order complex chaotic systems. It not only provides a new method of analyzing FOCCS but also significantly decreases the complexity of computation and analysis. 
In future works, we will further investigate the synchronization of FOCCS, and generalize the obtained results to more general cases, i.e., FOCCS with time delay and external disturbances."} +{"text": "They form complex communities and collectively affect host health. Recently, the advances in next-generation sequencing technology enable the high-throughput profiling of the human microbiome. This calls for a statistical model to construct microbial networks from the microbiome sequencing count data. As microbiome count data are high-dimensional and suffer from uneven sampling depth, over-dispersion, and zero-inflation, these characteristics can bias the network estimation and require specialized analytical tools. Here we propose a general framework, HARMONIES, Hybrid Approach foR MicrobiOme Network Inferences via Exploiting Sparsity, to infer a sparse microbiome network. HARMONIES first utilizes a zero-inflated negative binomial (ZINB) distribution to model the skewness and excess zeros in the microbiome data, as well as incorporates a stochastic process prior for sample-wise normalization. This approach infers a sparse and stable network by imposing non-trivial regularizations based on the Gaussian graphical model. In comprehensive simulation studies, HARMONIES outperformed four other commonly used methods. When using published microbiome data from a colorectal cancer study, it discovered a novel community with disease-enriched bacteria. In summary, HARMONIES is a novel and useful statistical framework for microbiome network inference, and it is available at Microbiota form complex community structures and collectively affect human health. Studying their relationship as a network can provide key insights into their biological mechanisms. The exponentially growing large datasets made available by next-generation sequencing (NGS) technology Metzker, , such asIn sequencing-based microbial association studies, the enormous amount of NGS data can be summarized in a sample-by-taxon count table where each entry is a proxy to the underlying true abundance. However, there is no simple relationship between the true abundances and the observed counts. Additionally, microbiome sequencing data usually have an inflated amount of zeros, uneven sequencing depths across samples, and over-dispersion. Initial attempts at constructing microbial association networks with this type of data , to infer the microbiome networks. It consists of two major steps: (1) normalization of the microbiome count data by fitting a zero-inflated negative binomial (ZINB) model with the Dirichlet process prior (DPP), (2) application of Glasso to ensure sparsity and using a stability-based approach to select the tuning parameter in Glasso. The estimated network contains the information of both the degree and the direction of associations between taxa, which facilitates the biological interpretation. We demonstrated that HARMONIES could outperform other state-of-the-art tools on extensive simulated and synthetic data. Further, we used HARMONIES to uncover unique associations between disease-specific genera from microbiome profiling data generated from a colorectal cancer study. Based on these results, HARMONIES will be a valuable statistical model to understand the complex microbial associations in microbiome studies. The Y denote the n-by-p taxonomic count matrix obtained from either the 16S rRNA or the metagenomic shotgun sequencing (MSS) technology. 
Each entry yij, i = 1, \u2026, n, j = 1, \u2026, p is a non-negative integer, indicating the total reads related to taxon j observed in sample i. It is recommended that all chosen taxa should be at the same taxonomic level since that mixing different taxonomic levels in the proposed model could lead to improper biological interpretation. As the real microbiome data are characterized by zero-inflation and over-dispersion, we model yij through a zero-inflated negative binomial (ZINB) model asLet i representing the proportion of \u201cextra\u201d zeros in sample i. The second component, NB, models the \u201ctrue\u201d zeros and all the nonzero observed counts. i.e., counts generated from a negative binomial (NB) distribution with the expectation of \u03bbij and dispersion 1/\u03d5j. Here, \u201ctrue\u201d zero refers to a taxon that is truly absent in the corresponding sample. The variance of the random variable from NB distribution, under the current parameterization, equals to j can lead to over-dispersion.The first component in the Equation (1) models whether zeros come from a degenerate distribution with a point mass at zero. It can be interpreted as the \u201cextra\u201d zeros due to insufficient sequencing effort. We can assume there exists a true underlying abundance for the taxon in its sample, but we fail to observe it with the mixture probability \u03c0i's and \u03d5j's, we use a Bayesian hierarchical model for parameter inference. First, we rewrite the model (1) by introducing a binary indicator variable \u03b7ij ~ Bernoulli(\u03c0i), such that yij = 0 if \u03b7ij = 1, and yij ~ NB if \u03b7ij = 0. Then, we formulate a beta-Bernoulli prior of \u03b7ij by assuming \u03c0i ~ Beta, and we let a\u03c0 = b\u03c0 = 1 to obtain a non-informative prior on \u03b7ij. We specify independent Gamma prior Ga for each dispersion parameter \u03d5j. Letting a\u03d5 = b\u03d5 = 0.001 results in a weakly informative gamma prior.To avoid explicitly fixing the value of \u03c0ij, contains the key information of the true underlying abundance of the corresponding count. As \u03bbij is affected by the varying sequencing effort across samples, we use a multiplicative characterization of the NB mean to justify the latent heterogeneity in microbiome sequencing data. Specifically, we assume \u03bbij = si\u03b1ij. Here, si is the sample-specific size factor that captures the variation in sequencing depth across samples, and \u03b1ij is the normalized abundance of taxon j in sample i.The mean parameter of the NB distribution, \u03bbsi and \u03b1ij. For example, si can be the reciprocal of the total number of reads in sample i. The resulted \u03b1ij is often called relative abundance, which represents the proportion of taxon j in sample i. In this setting, the relative abundances of all the taxa in one sample always sum up to 1. Similarly, other methods have been proposed with different constraints for normalizing the sequencing data = 0. For the outer mixtures, M is an arbitrarily large positive integer. Letting M \u2192 \u221e and defining the weight \u03c8m by the stick-breaking procedure such that \u03bdm ~ N, tm ~ Beta, and Vm ~ Beta. We further set where \u03c8A = {\u03b1ij} represents the true underlying abundance of the original count matrix. We further assume ij not only reduces the skewness of the normalized abundance, but converts the non-negative \u03b1ij to a real number. 
We apply the following conjugate setting to specify the priors for \u03bcj and j and j follows a non-standardized Student's t-distribution, i.e.,In our model, the normalized abundance matrix a0, b0, h0, and a0 = 2, b0 = 1 to obtain a weakly informative prior for h0 = 10 such that the normal prior on \u03bcj is fairly flat. We adopt the following prior specification for the rest model parameters. First, we assume an noninformative prior for each \u03c0i by letting a\u03c0 = b\u03c0 = 1. Next, we specify a\u03d5 = b\u03d5 = 0.001 in the Gamma prior distribution for all \u03d5j's. Then, we apply the following prior setting for the DPP: M = n/2, \u03c3s = 1, \u03c4\u03bd = 1, at = bt = 1, and am = bm = 1.As for the fixed parameters i et al. and set A, denoted as Z = log A, represents the normalized microbiome abundances on the log scale. We use Markov chain Monte Carlo (MCMC) algorithm for model parameter estimation based on the estimated \u03b7ij for each observed yij = 0 in the data. In particular, suppose that we observe L zeros in total. We calculate the marginal posterior probability of being 1 for each \u03b7l, l = 1, \u2026, L as B is the number of MCMC iteration after burn-in. This marginal posterior probability pl represents the proportion of MCMC iterations in which the lth 0 is essentially a missing value rather than the lowest count in the corresponding sample. Then, the observed zeros can be dichotomized by thresholding the L probabilities. The zeros with pl greater than the threshold are considered as \u201ctrue\" zeros in the data, whereas the rest are imputed by the corresponding posterior mean of log\u03b1j\u00b7. We used the method proposed by Newton et al. , which guarantees the imputed zeros have a Bayesian FDR to be smaller than c\u03b7,The logarithmic scale of n et al. to deterc\u03b7 = 0.01 guarantees that the Bayesian FDR to be at most 0.01. We set c\u03b7 = 0.05 for the simulation study and c\u03b7 = 0.01 for the real data analysis.In practice, a choice of G = is used to illustrate the associations among vertices V = {1, \u2026, p}, representing the p microbial taxa. E = {emk} is the collection of (undirected) edges, which is equivalently represented via a p-by-p adjacency matrix with emk = 1 or 0 according to whether vertices m and k are directly connected in G or not. GGM assumes that the joint distribution of p vertices is multivariate Gaussian N, yielding the following relationship between the dependency structure and the network: a zero entry in the precision matrix \u03a9 = \u03a3\u22121 indicates the corresponding vertices are conditional independent, and there is no edge between them in the graph G. Hence, a GGM can be defined in terms of the pairwise conditional independence. If X ~ N, thenBased on the normalized microbial abundances, we estimate their partial correlation matrix in order to construct the microbiome network under the Gaussian graphical model (GGM) framework. An undirected graph m and k, representing the degree and direction of association between two vertices, conditional on the rest variables. Consequently, learning the network is equivalent to estimating the precision matrix \u03a9. For real microbiome data, we set the taxa (on the same taxonomic level) as vertices. Hence, a zero partial correlation in the precision matrix can be interpreted as no association between the corresponding pair of taxa, while a nonzero partial correlation can be interpreted as cooperative or competing associations between that taxa pair.where \u03a9. 
The sparsity can be achieved by imposing l1-penalized log-likelihood,In biological applications, we often require a sparse and stable estimation of the precision matrix S is the sample covariance matrix. The coordinate descent algorithm can iteratively solve p. The estimated precision matrix is sparsistent to simulate the count table Yn \u00d7 p from a DM model, with its parameters being exp(D); (5) to mimic the zero-inflation in real microbiome data by randomly setting part of entries in the count table to zeros. Note that the data generative scheme is different from the model assumption, which is given in Equation (1). The detailed generative models are described below.We generated the simulated datasets from a Dirichlet-multinomial (DM) model using the following steps: (1) to generate the binary adjacency matrix; (2) to simulate the precision matrix and the corresponding covariance matrix; (3) to generate p-by-p adjacency matrix for the p taxa in the network. Here, the adjacency matrix was generated according to an Erd\u0151s\u2013R\u00e9nyi (ER) model. An ER model ER generates each edge in a graph G with probability \u03c1 independently from every other edge. Therefore, all graphs with p nodes and M edges have an equal probability of G correspond to the 1's in the resulted binary adjacency matrix. Next, we simulated the precision matrix \u03a9 following Peng et al. . To ensure positive definiteness of the precision matrix, we followed Peng et al. to represent the true underlying abundances D. To obtain a count matrix that fully mimics the microbiome sequencing data, we generated counts from a DM model with parameter exp(D). Specifically, we first sampled the underlying fractional abundances for the ith sample from a Dirichlet distribution. The ith underlying fractional abundance was then denoted as \u03c8i ~ Dirichlet(exp(Di\u00b7)). Next, the counts in the ith sample were generated from Multinomial. Finally, we randomly selected \u03c00% out of n \u00d7 p counts and set them to zeros to mimic the zero-inflation observed in the real microbiome data. In general, the generative process had different assumptions from the proposed method. Under the appropriate choice of parameters, the simulated count data was zero-inflated, overdispersed, and the total reads varied largely between samples. In practice, we let \u03c1 = 0.1 in the ER model. The mean parameter \u03bc of the underlying multivariate Gaussian variable was randomly sampled from a uniform distribution Unif. The number of total counts across samples Ni, i = 1, \u2026, n was sampled from a discrete uniform distribution with range . Under each combination of n, p, and \u03c00, we generated 50 replicated datasets by repeating the process above.Next, we simulated p taxa from a real microbiome dataset, the NorTA generates the synthetic data with n samples as follows: (1) to calculate the p-by-p covariance matrix \u03a30 from the input real dataset; (2) to generate an n-by-p matrix, denoted by Z0, from a multivariate Gaussian distribution with a mean of 0p1 \u00d7 and the covariance matrix of \u03a30; (3) to use standard normal cumulative distribution function to scale values in each column of Z0 within ; (4) to apply the quantile function of a ZINB distribution to generate count data from those scaled values in each column of Z0. In practice, we used R package SPIEC-EASI to implement the above data generative scheme, where the real data were from those healthy control subjects in our case study presented in section 3.2. 
Under each combination of n and p, we generated 50 replicated datasets.We generated synthetic data following the Normal-to-Anything (NorTA) approach proposed in Kurtz et al. . NorTA wl1 regularization. However, as discussed in section 1, representing the dependency structure by the correlation matrix may lead to the detection of spurious associations.We considered the four commonly used network learning methods. The first two methods, SPIEC-EASI-Glasso and SPIEC-EASI-mb, use the transformed microbiome abundances which are different from the normalized abundances estimated by HARMONIES. Both infer the microbial network by estimating a sparse precision matrix. The former method (SPIEC-EASI-Glasso) measures the dependency among microbiota by their partial correlation coefficients, and the latter method (SPIEC-EASI-mb) uses the \u201cneighborhood selection\u201d introduced by Meinshausen and B\u00fchlmann to constm and taxon k to be true positive if \u03c9mk \u2260 0, mk. We calculated the number of true negative (TN), false positive (FP), and false negative (FN) in a similar manner. Therefore, each tuning parameter defined a point on a ROC curve. As for the correlation-based methods, we started with ranking the absolute values in the estimated correlation matrices, denoted as We quantified the model performances on the simulated data by computing their receiver operating characteristic (ROC) curves and area under the ROC curve (AUC). For the HARMONIES or SPIEC-EASI, the network inference was based on the precision matrix. Hence, under each tuning parameter of Glasso, we calculated the number of edges being true positive (TP) by directly comparing the estimated precision matrix against the true one. More specifically, we considered an edge between taxon We further used the Matthew's correlation coefficient (MCC) to evaluate results from the simulated data. The MCC is defined asHere, the MCC was particularly suitable for evaluating network models. As the number of conditionally independent taxa pairs was assumed to be much greater than the number of dependent pairs in a sparse network, MCC was preferable to quantify the performances under such an imbalanced situation. Note that MCC ranges from , with a value close to 1 suggesting a better performance. Since each value of MCC was calculated using a given set of TP, TN, FP, and FN, we adopted the optimal choice of tuning parameter for the HARMONIES or SPIEC-EASI (with either Glasso or MB for network inference), given by StARS. As for the correlation-based methods, CClasso outputted a sparse correlation matrix. We used the result to calculate TP, TN, FP, and FN directly. For Pearson-corr, we set the threshold such that the resulted number of nonzero entries in the sparse correlation matrix was the same as the number of non-zero entries in the true sparse partial correlation matrix. In fact, this choice could favor the performance of Pearson-corr for larger sample size, as shown in section 3.1.p-values were used.To assess model performances on the synthetic datasets, we followed Kurtz et al. to use an = 60, 100, 200, or 500), total numbers of taxa (p = 40 or 60), extra percentages of zeros added . In each subfigure, the HARMONIES outperformed the alternative methods in terms of both AUC and MCC, and it maintained this advantage even with the number of sample size greatly increases. 
Further, a smaller sample size, a larger proportion of extra zeros added (\u03c00 = 20%), as well as a larger number of taxa in the network (p = 60), would hamper the performance of all the methods, as we expected. Two modes of SPIEC-EASI, SPIEC-EASI-Glasso, and SPIEC-EASI-mb, showed very similar performances under all the scenarios, with SPIEC-EASI-Glasso having only a marginal advantage over the other. Further, we observed that the Pearson-corr method yielded higher AUCs even than the precision matrix based methods, especially when there was a lager proportion of extra zeros or larger number of taxa in the network. This result suggested that the Pearson-corr could capture the overall rank of the signal strength in the actual network. However, under a fixed cut-off value that gave a sparse correlation network, the MCCs from the Pearson-corr were always smaller than the precision matrix based methods. Note that the cut-off value we specified for Pearson's correlation method indeed favored its performance. In general, the alternative methods considered here were able to reflect the overall rank of the signal strength by showing reasonable AUCs. However, they failed to give an accurate estimation of the network under a fixed cut-off value.n or decreasing the number of features p would improve the performance of all methods and lead to greater disparity between partial and pairwise correlation-based methods. In general, our HARMONIES maintained the best in all simulation and evaluation settings except for one case, where the SPIEC-EASI-mb only showed a marginal advantage is the third most common cancer diagnosed in both men and women in the United States .The datasets generated for the simulation study can be found in the author's Github page\u2014SJ performed the experiments. GX and AK provided resources and helpful discussions. SJ, QL, and XZ designed the experiment, performed data analysis, wrote the software, and the manuscript. SJ, YC, BY, and XZ developed the website for online implementation of HARMONIES.AK is a consultant for Merck and the principal investigator on a Norvatis sponsored study. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Scientific Reports 10.1038/s41598-019-41394-9, published online 22 March 2019Correction to: The original version of this Article contained errors.The title included the \u201c\u2264\u201d symbol at the beginning as a result of typesetting error.In addition, as a result of figure assembly errors, panels B and F of Figure\u00a02 were duplicated from author\u2019s previous research. This figure is now correct, and the raw data underlying it is now also included in the Supplementary Information file.These have now been corrected in the HTML and PDF versions of this Article, and in the accompanying Supplemental Material."} +{"text": "Scientific Reports 10.1038/s41598-020-65076-z, published online 25 May 2020Correction to: The original version of this Article contained an error in the title of the paper, where the word \u201ccooperation\u201d was incorrectly given as \u201ccooperaion\u201d. This has now been corrected in the PDF and HTML versions of the Article and in the accompanying Supplementary Information file."} +{"text": "While equilibrium phase transitions are easily described by order parameters and free-energy landscapes, for their non-stationary counterparts these quantities are usually ill-defined. 
Here, we probe transient non-equilibrium dynamics of an optically pumped, dye-filled microcavity. We quench the system to a far-from-equilibrium state and find delayed condensation close to a critical excitation energy, a transient equivalent of critical slowing down. Besides number fluctuations near the critical excitation energy, we show that transient phase transitions exhibit timing jitter in the condensate formation. This jitter is a manifestation of the randomness associated with spontaneous emission, showing that condensation is a stochastic, rather than deterministic process. Despite the non-equilibrium character of this phase transition, we construct an effective free-energy landscape that describes the formation jitter and allows, in principle, its generalization to a wider class of processes. Description of non-equilibrium phase transitions is problematic, due to the absence of suitable free energy landscapes. Here, the authors experimentally show delayed photon condensation and timing jitter in a dye-filled microcavity, modelled by a non-equilibrium extension of the free-energy landscape. This relation is particularly relevant near a second-order phase transition, where all the relevant macroscopic details are described by an emergent order parameter. This is, on average, located at the minimum of the free-energy landscape3, with its neighbourhood being locally probed as the system is driven through configuration space by fluctuations . While we still lack a universal generalization of these ideas to non-equilibrium systems, Jaynes has suggested its possibility4. Some well-established, near-equilibrium stochastic descriptions of relaxation involving free-energy surfaces are known5, but are not necessarily valid far from equilibrium. A particular example in this direction has been constructed for the laser6, a fundamentally non-equilibrium system whose steady state can be described as the minimum of a properly defined effective free energy, corresponding to a detailed balance between driving and dissipation.The connection between the properties of a system at thermal equilibrium and the geometry of its free-energy landscape is a powerful concept that dates back to original ideas developed by Gibbs8 or the build-up of anti-ferromagnetic correlations in Ising models9. Non-Hamiltonian quenches contain a more general class of processes. In cold atoms, for instance, the Kibble\u2013Zurek mechanism11 is observed by evaporatively cooling the system at a finite rate, quenching the system through a BEC phase transition. We shall refer to a quench as a sudden change in one of the system parameters that brings it to a far-from-equilibrium state, without affecting its Hamiltonian.While the previous arguments are relevant for systems close to a steady state, a sudden parameter change, often called a quench, necessarily brings the system sufficiently far from equilibrium to question the validity of such approaches. The meaning of a quench depends on context and, in particular, one can distinguish between Hamiltonian and non-Hamiltonian cases. The former consist of time-dependent variations in some sort of interaction term, involved, for instance, in the Mott insulator-superfluid transition15. Following the initial observation of Bose\u2013Einstein condensation of photons16, a number of experiments on grand-canonical fluctuations17, spontaneous symmetry breaking18, emergence of long-range order19, among other aspects of equilibrium physics21 have been described. 
Their non-equilibrium counterparts, however, remain greatly unexplored.Photon condensates are ideal platforms to explore both equilibrium and non-equilibrium physics. A thermalizing medium, typically a dye solution, is placed inside an incoherently pumped optical microcavity. The combined rates of thermalization, pumping and cavity loss enable such a driven-dissipative system to be tuned between in- and out-of-equilibrium regimesg(2). It provides access to the statistical properties of the photon condensation transition, and is particularly relevant in non-stationary systems, when the full knowledge of individual realizations is inaccessible. While the usual stationary correlation function, g(2)(\u03c4), accurately accounts for fluctuations in steady state, g(2) is the appropriate quantity to describe the evolution of transient, non-equilibrium systems. The averaged condensate intensity as a function of time shows width broadening, a manifestation of diverging jitter in the condensate formation time upon approaching the critical excitation energy. This effect is directly witnessed by distinctive off-diagonal anti-correlations in g(2) and originates from quantum fluctuations associated with spontaneous emission. By properly defining an effective (non-equilibrium) free energy, we suggest that jitter may be a universal feature of transient phase transitions in systems obeying relatively general conditions on the convexity of their free-energy landscape.Here, we study the transient dynamics of photon condensation that follows a quench in a dye-filled optical microcavity. Besides measuring the ensemble-averaged photon number dynamics, we introduce the non-stationary, two-time, second-order correlation function \u03ba and \u0393\u2193, respectively. The essentials of the cavity dynamics are described by the density operator \u03c1, for both photons and molecules, which obeys the master equation23E and A the dye emission and absorption rates, respectively, \u0393\u2191 the incoherent pumping rate and Nmol the total number of molecules inside the cavity. In general, \u0393\u2191 is a time-dependent quantity, such as in the case of pulsed pumping. Due to the high collision rate between dye and solvent molecules, all the relevant cavity processes, including light\u2013matter interactions, are incoherent.Despite the fundamentally multi-mode character of our optical cavity, the phenomenology described here is essentially that of a single-mode system. Cavity excitations (photons and excited molecules) can be lost by two processes: mirror transmission and molecular spontaneous emission into free space, at rates f\u2009=\u2009fc, and in the absence of pumping and losses (\u03ba\u2009=\u2009\u0393\u2193\u2009=\u2009\u0393\u2191\u2009=\u20090), an equilibrium (steady state) between molecular excitations and photons is established by a principle of detailed balance13. The photon number in this equilibrium state would show a phase transition as the total number of cavity excitations, Nex\u2009=\u2009n\u2009+\u2009f\u2009Nmol, which is the control parameter, is increased. The photon number, or order parameter, ranges from a disordered phase (n\u2009\u2272\u20091) dominated by spontaneous emission to an ordered phase (n\u2009\u226b\u20091) dominated by stimulated emission. While there is, in principle, a U(1) symmetry breaking upon crossing the condensation phase transition, the full dynamics can be described simply through photon number n12. 
By exciting a large number of dye molecules over a short period of time, the cavity can be quenched through this phase transition to a far-from-equilibrium state. The subsequent relaxation dynamics correspond to a non-stationary, transient counterpart of the equilibrium phase transition described above. In this way, we define a transient phase transition as the evolution in configuration space after a jump across a phase transition in parameter space (a quench), where \u201cphase transition\u201d has its usual time-independent, thermodynamic meaning. This is distinct from the recently introduced concept of dynamical phase transitions25. The non-linear coupling between photons and molecular excitations occurring during this transient relaxation process gives rise to non-trivial fluctuation and correlation properties. Finally, given its lossy character, the light will transition back to the phase dominated by spontaneous emission before all excitations are lost.Mean-field rate equations are obtained by taking expectation values and neglecting correlation terms in Eq.\u00a0. The num\u2193 accounts for emission both into free space and cavity modes that do not reach the regime of stimulated emission (do not condense), which will be discussed in more detail later. Also, and despite not being relevant for the results discussed here, effects associated with the multi-mode character as well as spatially resolved molecular reservoirs have been appreciated in the context of gain clamping26 and decondensation mechanisms14.A few notes are in order regarding the multi-mode nature of our cavity. Within the single-mode approximation, the rate term \u0393\u03bcm. A 40\u2009ps laser pulse at 532\u2009nm, typically ranging from 0.5 to 2\u2009nJ in energy, is used to rapidly excite the molecules, quenching the cavity to a far-from-equilibrium state. In response, a much longer pulse (\u22731\u2009ns) of light leaks from the cavity mirrors, the exact temporal shape of which depends both on the cavity parameters as well as the number of molecular excitations that follow the pump pulse, which is the control parameter used to select the different dynamical phases. A portion of pump light is directed onto two saturated avalanche single-photon detectors (APDs), O1 and O2, where a coincident detection is used as a time stamp for the beginning of the experiment, with a measured uncertainty of about 10\u2009ps. The cavity output light is directed onto two unsaturated APDs, B1 and B2, with an average of 0.1 detections per pulse, on each detector. The experiment is conducted at a repetition rate of 11\u2009kHz. Such a low repetition rate ensures a complete decay of all excitations and statistical independence between different realizations. We describe the experimental results in the form of following three sets:zero-time statistics: full time-averaged cavity output;one-time statistics: time-resolved, but averaged over all forms of fluctuations and correlations in the cavity output;two-time statistics: unequal time, cross-correlated signal from detectors B1 and B2, providing access to fluctuations in the cavity output.The experimental configuration is sketched in Fig.\u00a0P is increased beyond a critical value Pth, as shown in Fig.\u00a014. Consequently, and despite the absence of a full thermal distribution, parallels may be drawn with Bose\u2013Einstein condensation of photons28. 
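In terms of raw data, the one-time statistics above amount to histogramming detection times, measured relative to the pump coincidence time stamp, over many statistically independent shots. The sketch below assumes a hypothetical click_times structure (one array of detection times per shot, pooled over detectors B1 and B2).

import numpy as np

def one_time_statistics(click_times, t_max=5e-9, n_bins=250):
    """Average detection-rate profile from per-shot photon arrival times."""
    edges = np.linspace(0.0, t_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for shot in click_times:                     # one entry per laser shot
        hist, _ = np.histogram(shot, bins=edges)
        counts += hist
    width = edges[1] - edges[0]
    centers = edges[:-1] + 0.5 * width
    return centers, counts / (len(click_times) * width)  # detections per second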
While BEC is only strictly defined in thermal equilibrium as the macroscopic occupation of the ground state, we are assuming here a broader concept of condensation, as discussed in such diverse fields as physics, ecology, network theory or social sciences31. This can be thought of as the process where a particular, or small set of modes, in a multi-mode system becomes macroscopically occupied while the remaining ones saturate or become depleted.We begin by demonstrating the existence of a (condensation) phase transition in the total amount of light emitted by the cavity as the excitation energy, or pump energy, \u03b1 is an empirical parameter set by fitting the light-yield curve to the mean-field rate equations. Loosely speaking, \u03b1 determines the fraction of spontaneous emission into non-modelled cavity modes and is expected to be a small contribution, which will be verified in \u201cOne-time statistics\u201d section.By counting the rate of detection events in B1 and B2, we measure the total cavity output as a function of input pulse energy , as shown in the inset of Fig.\u00a0P\u2009<\u2009Pth) display a simple exponential decay on a time scale of about \u03c40\u2009~\u20094\u2009ns, the molecular excited-state lifetime. Above threshold, stimulated emission becomes important, leading to a large increase in photon number, followed by rapid depopulation of the condensate before a final decay at the slower time scale of the molecular excited-state decay.Here, we expand the time-averaged results of the previous section to the time-dependent cavity output pulse shape, as shown in Fig.\u00a0f, and the number of cavity photons, n. In equilibrium, the molecular excitation fraction, f, cannot exceed its critical value, fc. Under non-equilibrium conditions, however, if at any instant f\u2009>\u2009fc (e.g. after a quench), the photon population will grow exponentially until f drops below fc. This exponential increase in the number of photons, resulting from the onset of stimulated emission, is accompanied by rapid de-excitation of molecules, as shown in Fig.\u00a0f\u2009Nmol, and the photon number n. We choose experimental parameters that make the number of photons comparable with f\u2009Nmol, such that the effects of this two-way coupling become more prominent. On the opposite limit of f\u2009Nmol\u2009\u226b\u2009n, which is approached for lower values of \u03bb0 (the cavity cutoff wavelength), the larger molecular reservoir becomes insensitive to photon number fluctuations, approaching the limit of a Markovian bath. This limit is at the origin of the observation of grand-canonical number statistics in photon BECs17.It is instructive at this point to reflect upon the interplay and coupled dynamics of the molecular excitation fraction, \u2193 and \u03ba to the results in Fig.\u00a0\u03ba\u2009=\u20091010\u2009s\u22121, corresponding to a cavity lifetime of 100\u2009ps, and \u0393\u2193\u2009=\u20090.998\u03930, with \u03930\u2009=\u20091\u2215\u03c40 the molecular fluorescence decay rate. Within the single-mode approximation, this means that only 0.2% of the total molecular emission goes into the condensing mode while 99.8% goes both into free space and excited modes that do not reach the regime of stimulated emission. Together with the light-yield curves in Fig.\u00a0\u03b1\u2009=\u20090.13; small as expected. The emission and absorption rates are not taken as fitting parameters but rather calculated from experimental absorption and emission data for rhodamine-6G32. 
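The transient critical slowing down described above can be reproduced, at a qualitative level, with the mean-field sketch introduced earlier (the same illustrative rate constants are repeated so the snippet is self-contained): the first-passage time of the photon number above a fixed level grows sharply, and eventually diverges, as the post-quench excitation fraction f0 approaches its effective critical value from above.

import numpy as np
from scipy.integrate import solve_ivp

kappa, gamma_down, e_rate, a_rate, n_mol = 1e10, 2.5e8, 1e3, 1e3, 1.9e8

def rates(t, y):
    n, f = y
    gain = e_rate * f * n_mol * (n + 1.0)
    loss = a_rate * (1.0 - f) * n_mol * n
    return [-kappa * n + gain - loss,
            (-gamma_down * f * n_mol - gain + loss) / n_mol]

def formation_time(f0, level=1e4, t_max=10e-9):
    """First time the mean photon number exceeds `level` after the quench."""
    sol = solve_ivp(rates, (0.0, t_max), [0.0, f0], dense_output=True)
    t = np.linspace(0.0, t_max, 8000)
    n = sol.sol(t)[0]
    above = np.nonzero(n > level)[0]
    return t[above[0]] if above.size else np.inf   # inf: no condensation

for f0 in (0.55, 0.60, 0.70, 0.85):  # the lossless critical value sits near 0.53 here
    print(f0, formation_time(f0))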
The total number of molecules is calculated from the dye concentration and cavity volume to be Nmol\u2009=\u20091.9\u2009\u00d7\u2009108.We fit \u0393f approaches fc from above, the cavity dynamics become slow, as dictated by Eq. . Second-order correlations are typically described by the single-time g(2)(\u03c4) function, with \u03c4\u2009=\u2009t1\u2009\u2212\u2009t2, due to time-translation symmetry in steady-state conditions. In transient systems, however, the absence of this symmetry means that the full two-time, t1 and t2, dependence must be retained. We can then defineP is the joint probability of photon detection at times t1 and t2 in detectors B1 and B2, respectively. By marginalizing over the second detector, P(t1) and P(t2) are obtained as the single-detector probabilities. The approximation in Eq. (a\u2020(t1),\u00a0a(t2)]\u2009\u2248\u20090 or \u2329a\u2020(t)a(t)\u232a\u2009\u226b\u20091. The former is satisfied when \u2223t1\u2009\u2212\u2009t2\u2223 is larger than the coherence time , and the latter is true for large photon numbers, as verified in Fig.\u00a0Correlations and fluctuations of the cavity output can now be investigated by retaining the labelling of detection timestamps in B1 and B2. We then construct the two-time, non-stationary, second-order correlation function g(2)\u2009>\u20091) and the off-diagonal anti-correlation (g(2)\u2009<\u20091) lobes. These features are mainly a manifestation of the same kind of fluctuations\u2014jitter, or shot-to-shot timing fluctuations, in the condensate formation\u2014which become amplified near the critical excitation energy. In the remainder of this section, we discuss this effect associated with transient phase transitions.The second-order correlation function is shown in Fig.\u00a0g(2) provides immediate information on number, or intensity, fluctuations, namely g(2)\u2009\u2243\u20091\u2009+\u2009\u2329\u0394n(t)2\u232a\u2215\u2329n(t)\u232a2. As such, periods of larger fluctuations coincide with the inflection point of the average pulse shape, consistent with a condensate forming at slightly different instants in each realization of the experiment. In a microscopic picture of the cavity dynamics, spontaneously emitted photons are required to seed the condensate growth. The randomness associated with the quantum nature of spontaneous emission then leads to such shot-to-shot time fluctuations, or jitter in the condensate formation. As we shall demonstrate in the next section, these periods of larger fluctuations correspond to a passage through the convex part of an effective free-energy landscape.Let us proceed by separately analysing diagonal and anti-diagonal correlations, as shown in Fig.\u00a0g(2)\u2009<\u20091 are then an immediate witness of fluctuations in formation time. Also, it is further evidence of the two-way coupling between photons and molecules, as discussed previously.The above interpretation is further supported by the off-diagonal anti-correlation lobes, as seen in Figs.\u00a037, the two-time correlation function can be obtained via the quantum regression theorem38, as shown in Fig.\u00a042. Different classes of events are defined and drawn at random given their respective rates. These are dynamically calculated as the number of photons and molecular excitations are updated at each step. Full details of this model can be found in \u201cMethods\u201d section. 
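Operationally, the definition above translates directly into an estimator built from the per-shot click records of detectors B1 and B2: histogram the joint detections, histogram the marginals, and take the ratio g(2)(t1, t2) ≈ P(t1, t2)/(P(t1)P(t2)). The data structures below (t1_clicks, t2_clicks) are hypothetical stand-ins for the time-stamped detection lists.

import numpy as np

def g2_two_time(t1_clicks, t2_clicks, t_max=5e-9, n_bins=60):
    """Non-stationary g(2)(t1, t2) from per-shot click times on two detectors."""
    edges = np.linspace(0.0, t_max, n_bins + 1)
    joint = np.zeros((n_bins, n_bins))
    m1 = np.zeros(n_bins)
    m2 = np.zeros(n_bins)
    for a, b in zip(t1_clicks, t2_clicks):   # iterate over experimental shots
        h1, _ = np.histogram(a, bins=edges)
        h2, _ = np.histogram(b, bins=edges)
        joint += np.outer(h1, h2)            # cross-detector coincidences
        m1 += h1
        m2 += h2
    n_shots = len(t1_clicks)
    joint /= n_shots
    m1 /= n_shots
    m2 /= n_shots
    with np.errstate(divide="ignore", invalid="ignore"):
        g2 = joint / np.outer(m1, m2)        # empty bins give nan/inf entries
    return edges, g2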
In the limit of a large number of realizations, the quantum-trajectories approach is equivalent to evolving the density matrix according to Eq. (…); by retaining correlations up to second order, the correlation function g(2) follows directly. The experiment does not allow direct access to individual trajectories, only the effect of their relative fluctuations on g(2). However, the quantum trajectories method allows us to easily appreciate the effect of formation jitter, depicted in Fig. …: g(2) depends on both the individual pulse shapes and their uncertainty in formation time. As it turns out, the earlier-forming pulses (relatively far above threshold) are of shorter duration than later-forming pulses. This effect competes with the larger fluctuations in formation time closer to threshold, such that a diverging behaviour may not be extractable from the g(2) maps alone.
Note that the fluctuations described here are of a different origin than those arising from the grand-canonical nature of a photon BEC [17], which predicts g(2)(0) = 2. In the latter, a steady state is achieved by a detailed balance between cavity loss and continuous pumping, with the fluctuations being related to the coupling between the photons and the molecular grand-canonical reservoir in conditions of thermal equilibrium. There is time-translation symmetry, and number fluctuations are damped within a 2 ns time scale [17]. In contrast, the g(2) structure we identify here reflects the propagation of the initial fluctuations associated with the spontaneous emission events that trigger the growth of the condensate pulse. The system never reaches a steady state and all the dynamics are fundamentally transient. In principle, g(2), which depends on both the individual pulse shapes and their formation jitter, can even be larger than 2.
We now develop a general treatment of the relaxation process described in the "Results" section, thus highlighting its universal features and applicability outside the particular case of photon condensates. Relaxation of an order parameter ψ(t) towards its equilibrium value ψ0 can be generically modelled by the time-dependent Landau equation [45], with F = F(ψ) the near-equilibrium free energy and η a generic Langevin stochastic force. This defines a universal class of dissipative relaxation processes [5], typically valid near thermal equilibrium, where F = F(ψ) can be expanded in a Taylor series around ψ0. We shall undertake here a different approach that extends the validity of the model above into far-from-equilibrium conditions.
In the absence of driving and dissipation (κ = Γ↓ = Γ↑ ≡ 0), the full non-equilibrium dynamics described by Eqs. (…) can be recast in terms of the order parameter (ψ → n) and the total number of excitations, Nex = n + f Nmol, the control parameter, as defined earlier. By retaining the full dynamical information contained in the mean-field rate Eqs. (…), this defines an effective free-energy landscape whose minima correspond to the steady states. The stochastic force η accounts for spontaneous emission, such that ⟨η(t)η(0)⟩ = f²E²δ(t) [50]. This allows us to include fluctuations and beyond-mean-field effects in a universal model of relaxation. An alternative, and formally equivalent, approach would be the construction of a Fokker–Planck equation [5] for the probability density function (PDF) Pn(t), with a drift term given by the derivative of the effective free energy, as in Eq. (…), and shown in Fig. ….
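A minimal stochastic sketch of this picture: an order parameter obeying the overdamped Landau form dψ/dt = −F′(ψ) + η, integrated with an Euler–Maruyama step. The quartic landscape used below is a toy stand-in for the effective free energy, not the one reconstructed from the cavity model, but it already shows how noise-seeded passage through the flat, then convex, region near ψ = 0 produces shot-to-shot jitter in the formation time of the ordered phase.

import numpy as np

rng = np.random.default_rng(0)

def F_prime(psi, r=1.0):
    # Toy landscape F(psi) = -r psi^2/2 + psi^4/4: a convex region near psi = 0
    # amplifies fluctuations before relaxation to the minimum at psi = sqrt(r).
    return -r * psi + psi**3

def relax(n_steps=20000, dt=1e-3, noise=1e-3):
    """One stochastic relaxation trajectory (Euler-Maruyama discretization)."""
    psi = 0.0                       # quenched, disordered initial condition
    out = np.empty(n_steps)
    for i in range(n_steps):
        psi += -F_prime(psi) * dt + np.sqrt(dt) * noise * rng.standard_normal()
        out[i] = psi
    return out

# Formation jitter: shot-to-shot spread of the first time the order parameter
# reaches half its equilibrium value.
times = []
for _ in range(200):
    traj = relax()
    hit = np.nonzero(traj > 0.5)[0]
    times.append(hit[0] if hit.size else np.nan)
jitter = np.nanstd(times)           # standard deviation of formation times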
While the free-energy landscape is more convex for larger values of the control parameter, which increases number fluctuations, these are longer-lived close to threshold (larger jitter). This effect witnesses the complex interplay between number and time fluctuations associated with the stochastic transient relaxation process determined by Eq. curvature in the free energy tends to localise it. At a second-order phase transition, the curvature at the minimum of the free energy (defining an equilibrium order parameter) vanishes and the PDF shows diverging fluctuations that persist for long times, giving rise to critical slowing down. In transient, non-equilibrium systems dramatic features also occur, with regions of negative (convex) curvature acting to amplify fluctuations. A PDF evolving through these regions while relaxing towards the free-energy minimum experiences a short-lived but large increase of fluctuations, as shown in Fig.\u00a0d by Eq. . Even in\u03ba, as in the case of the experiment described in \u201cResults\u201d section, the rate of change of the free-energy landscape depends on the photon number n, which is itself described by a given PDF. The landscape is now neither constant, nor a simple function of time, but rather coupled to the photon number history, such that for trajectories where the condensate forms early, it also decays early, leading to the anti-correlation lobes seen in g(2). This is essentially the same result as depicted by the quantum trajectories simulation in Fig.\u00a0In the presence of loss by cavity transmission, the free energy and photon number PDF are coupled in a non-trivial way. For sufficiently large Nex, the control parameter, to be fixed, such that all molecular excitations are converted into cavity photons, corresponding to the limit of negligible losses. As a final remark, Eq. (F(\u03c8), from the observed average dynamics of the order parameter, \u03c8(t), although in practice the results are not very informative.The free-energy description assumes the total number of cavity excitations ark, Eq. allows fIn this work, we have described the transient non-equilibrium dynamics of light in a dye-filled optical cavity quenched through a condensation phase transition. By rapidly exciting a large number of dye molecules, the system is brought to a far-from-equilibrium state. By averaging over all forms of fluctuations, we observed a delayed formation of the condensed phase, interpreted as a transient equivalent of critical slowing down. When quenched above the condensation threshold excitation energy, the quantum fluctuations associated with spontaneous emission seed the growth of the order parameter as the system relaxes into equilibrium. The relaxation dynamics is slower close to the critical point, a feature easily interpreted under the geometrical properties of the effective free-energy landscape, which becomes flat. The same mechanism is responsible for the usual critical slowing down in the relaxation rate of the ordered phase that follows a second-order phase transition. Also, despite the absence of latent heat and the fact that we are dealing with second-order and not first-order phase transitions, analogies can be drawn with the precipitation in supercooled, or supersaturated, liquids. Even quenched above the critical point, a seed of spontaneously emitted photons is needed to nucleate condensation, playing the role of the seeding crystals in supercooled, or supersaturated, liquids. 
Also, once seeded, crystallization across the entire liquid is faster for liquids quenched further across their critical parameters, with temperature playing the same role as the excitation fraction that follows the quench, in the optical cavity context.g(2). More precisely, we demonstrated that while the diagonal of g(2) is a powerful probe of the geometrical properties of the free-energy landscape, its off-diagonal elements reflect the relevant dissipation processes, with the anti-correlation lobes a joint effect of jitter and cavity loss. Fluctuations, arising from spontaneous emission, are highly amplified as the order parameter goes through the convex part of the free-energy landscape towards its equilibrium point.By measuring the statistical properties of this transient condensation, we describe a novel form of diverging fluctuations around the critical point, jitter in the formation of the ordered phase. These are witnessed by strong diagonal correlations and off-diagonal anti-correlations in the non-stationary, second-order correlation function, 54 may now benefit from being re-examined. Despite some recent efforts in describing time fluctuations and other non-equilibrium features of micro- and nano-lasers57, and to the best of our knowledge, we present here for the first time a generic and comprehensive description of the relation between temporal and number fluctuations in the non-stationary dynamics of systems undergoing second-order phase transitions. Finally, the system studied in this work, as well as the related examples stated above can be described by single-value order parameters. One may wonder on the generalization of these effects in spatially extended systems, where the order parameter is a function of both space and time. In the context of the Kibble\u2013Zurek mechanism11, for instance, most studies are simply concerned with the defect number scaling after the system relaxes to some steady state, with the intrinsic relaxation dynamics often ignored. As such, although we cannot anticipate specific effects, one wonders about the correspondence between transient fluctuation dynamics of the zero-dimensional system described in our paper and that of spatially extended systems.The description in terms of the geometric properties of the effective free-energy landscape, being independent of the microscopical details of our particular system, allows us to generalize our observations. In particular, both the transient critical slowing down and the jitter in the formation of the order parameter are expected to be universal features of the dynamics that follows a quench through a second-order phase transition. In micro- and nano-lasers, in particular, the full two-time, non-stationary analysis of the relaxation process has been greatly overlooked and previous resultsFrom the non-equilibrium model introduced in Eq.\u00a0, one cannm\u232a depends on the estimation of \u2329n2m\u232a, \u2329nm2\u232a, \u2329n3\u232a,\u00a0\u2026\u00a0, which requires solving a large number of ordinary differential equations. These can be reduced with an hierarchical set of approximations. For instance, in the semi-classical limit, the expectation values for n and m are factorized, \u2329mn\u232a\u2009\u2248\u2009\u2329m\u232a\u2329n\u232a, reducing Eq.\u00a0..3). Here37 given byIn order to account for correlations and fluctuations, one needs to go beyond the semi-classical approximation. 
In particular, the expectation values can be expanded in a hierarchical mannerx2\u232a\u2009\u2212\u2009\u2329x\u232a2, with x\u2009=\u2009{n,\u00a0m}, we explicitly writeThese represent the second, third, and fourth order cumulants, with the summation referring to all possible combination of variables. A minimal description of correlations is constructed by truncating the hierarchy at second order. In this way, and by defining g(2)(t), follow immediately asThe second-order photon correlation function at zero-time delay, 38, which allows us to calculate any quantity of the form \u2329X(t\u2009+\u2009\u03c4)Y(t)\u232a using two single-time evolutions. Let the initial state of the system be \u03c7(0), and the evolution be given by the map, \u03c7(t)\u2009=\u2009The two-time second-order correlation function can be obtained by invoking the quantum regression theorem\u03c7(0) from 0 to t, followed by the conditional state Y\u03c7(t) from t to t\u2009+\u2009\u03c4. For our cavity model, we begin by first evolving the density operator, \u03c1, from t\u2009=\u20090 to t\u2009=\u2009t1, using the second-order rate Eqs. (g(2)(t1). Second, the first-order rate Eqs. \u03c7(0) frome Eqs. (g(t1). Secate Eqs. are usedwing Eq.\u00a0, one the42. Here, the Lindblad dynamics of the density operator \u03c1 is replaced by a wavefunction whose evolution is given by a non-Hermitian effective Hamiltonian, interspersed with stochastic quantum jumps. Subsequently, evolution of \u03c1 is approximated by an ensemble average of wavefunctions, or trajectories, say z, the average of any observable is then given byThe second-order approach described above corresponds to a first-level approximation to the description of correlations and fluctuations in the cavity dynamics. Moving to higher-order expansions increases the number of ordinary differential equations needed to resolve the dynamics, which soon becomes cumbersome and impractical. An alternative approach to solve the master Eq.\u00a0 is to usJk are the jump operators defining the stochastic dynamics. In the non-equilibrium cavity model, coherences cannot be created by Jk, occuring at rates Rk:The effective non-Hermitian Hamiltonian for the non-equilibrium cavity model in Eq.\u00a0 is givenRk. The time between consecutive events is drawn from an exponential distribution, whose mean is the inverse of total rate of events. From a large ensemble of trajectories, we can calculate the non-stationary second-order correlation function, g(2), asA particular quantum trajectory is constructed by drawing a series of stochastic events, with their individual probabilities proportional to the rates Peer Review File"} +{"text": "The support of a patch is the union of the supports of the tiles that are in it. The translate of a patch P by translationally equivalent if tiling of patch. Recall that a tiling repetitive if every finite local complexity (FLC) if, for every R > 0, there are finitely many equivalence classes of R.We let 2.2.set in Delone \u03ba-set in A \u03ba-Meyer set in F of points for Delone \u03ba-sets have a meaning analogous to the colours of tiles for tilings. We define repetitivity and FLC for a Delone \u03ba-set in the same way as for tilings. A Delone set \u039b is called a 2.3.expansive if there is a constant c > 1 with d on We say that a linear map prototiles. 
Denote by tile-substitution (or simply substitution) with an expansive map \u03d5 if there exist finite sets digit set (b) consists of a collection of spaces and mappings as follows: H is a locally compact Abelian group, i.e. a discrete subgroup for which the quotient group H. For a subset model set in regular if the boundary of Wmodel \u03ba-set if each H is a Euclidean space, we call the model set \u039b a Euclidean model set rigid if A tiling 3.1.J, and et al. For any Note that \u25a14. here.et al., 2007(Schlottmann, 2000From the assumption of pure discrete spectrum and Remark 5.5Let Under the assumption of pure discrete spectrum, we know that myak 2019 and \u03d5 fumyak 2019. From Th\u25a1Let It is known that any regular model sets have pure discrete spectrum in quite a general setting (Schlottmann, 2000\u25a1The next example shows that the unimodularity of \u03d5 is necessary.et al. (1998et al. (1998Let us consider an example of non-unimodular substitution tiling which is studied by Baake al. 1998. This ex al. 1998 with thea to the right-hand side by the substitution and the letter b to the left-hand side. So we can get a bi-infinite sequence fixed under the substitution. A geometric substitution tiling arising from this substitution can be obtained by replacing symbols a and b in this sequence by the intervals of length a, and b, we can check The substitution matrix of the primitive two-letter substitution 6.et al. (1998et al. (1998We have mainly considered unimodular substitution tilings in this paper. Example 5.11 al. 1998. It cann al. 1998, which s"} +{"text": "The standard tool to classify ceers is provided by the computable reducibility Computable reducibility is a longstanding notion that allows classifying equivalence relations on natural numbers according to their complexity.R,\u00a0S be equivalence relations with domain R is computably reducible to S, denoted f such that, for all Let f is a computable function that reduces R to S; c-degrees are introduced in the standard way.We write universal ceers, i.e., ceers to which all other ceers are computably reducible. The degree of universal ceers is by now significantly explored: for instance, in , where s. Except for R[0], which is the empty set, for every R[s] is an equivalence relation in which almost all equivalence classes are singletons. Notice that we do not define here R with an accompanying R so that f must start with x.Let us define the Strategies for the requirements and their interactions The strategy for Q-requirement i. The reason we choose P-requirements. The requirement u,\u00a0v, both even, or both odd, which avoid the finitely many classes restrained by higher priority requirements. When found, it simply R-collapses u,\u00a0v : this ensures that R. Notice that if R-collapsing two even numbers or two odd numbers, since the construction will ensure that a necessary condition, at any stage s, for which we may have x,\u00a0y have different parity.For the R, i.e., R being in When at a stage The construction The construction is in stages: at stage s we define the approximation R[s] to R, and the approximations to the various parameters initialize a Q-requirement at stage s means to set as undefined at that stage the current value of its witness. 
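Although the construction above is a priority argument rather than a program to be executed, the finitary bookkeeping it relies on — approximations R[s] in which almost all classes are singletons and classes only ever merge — is exactly what a union-find structure maintains. The sketch below illustrates that bookkeeping only (the parity guard mirrors the remark that only two even or two odd numbers are ever R-collapsed); it is not the authors' construction itself.

class CeerApproximation:
    """Stage-by-stage approximation to a c.e. equivalence relation:
    collapses are forever, so classes can only grow by merging."""

    def __init__(self):
        self.parent = {}  # union-find forest over the numbers seen so far

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def related(self, x, y):
        return self.find(x) == self.find(y)

    def collapse(self, x, y):
        # R-collapse the classes of x and y; in the construction sketched
        # here this is only ever done for numbers of the same parity.
        if x % 2 != y % 2:
            raise ValueError("only numbers of equal parity are collapsed")
        self.parent[self.find(x)] = self.find(y)

R = CeerApproximation()
R.collapse(2, 4)
assert R.related(2, 4) and not R.related(2, 3)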
A pair active witness for s if the pair has been appointed as a witness for inactive at the end of stage s if there are already distinct numbers s; it is active otherwise.Trequires attention at stageT is initialized; ore: s, and at the current stage there is a pair of distinct numbers u,\u00a0v bigger than all numbers in the union of all current equivalence classes of numbers u,\u00a0v are respectful of the restraint imposed by higher priority requirements);T at the end of stage s.one of the following holds, for some Stage 0 Initialize all Q-requirements. Define x; consequently A requirement Stage 1 Define x, leaving f and StageT that requires attention. Action: T is initialized, then e: choose a fresh witness T, i.e. i is bigger than all numbers so far used in the construction ; in turn, R is not re-initialized: this string automatically becomes R in the priority ordering. The strings s.The strategy for U-equivalence classes of i and The strategy for P-requirement U,\u00a0V so that there are distinct u,\u00a0v can be V-collapsed without injuring higher priority restraints. It sets up a restraint asking that the initial segment U, sufficient to keep Let us now consider a P- and Q-requirements may have to be (re-)initialized as in the proof of Theorem\u00a0V-collapses a pair of distinct numbers, then this collapse is forever, reflected in the fact that we define U and V are interchanged.Due to the action of higher priority requirements, Rrequires attention at stageR is initialized and R is a Q-requirement; orU and V): R is active at the end of s of distinct numbers of the same parity such that: (i)U[s]), and (ii)V[s] give the same equivalence classes relatively to the V-equivalence classes restrained by higher priority V-collapse of the equivalence classes of u,\u00a0v does not alter R at the end of stage s.one of the following holds, with A requirement The construction At stage s we define approximations to U[s] and V[s] will be the equivalence relations having U[s] and V[s] are equivalence relations with only finite equivalence classes and such that almost all equivalence classes are singletons; on the other hand we define Stage 0 Initialize all P- and Q-requirements; for x; consequently Stage 1 For x, keeping the values already defined at 0 for the other values of both StageR that requires attention. Action: R is initialized, then e and U and V. As in the proof of Theorem\u00a0R, i.e. i is bigger than all numbers so far used in the construction, bigger than the length of s; consequently If e and U and V. Then pick the least (by code) triple V-collapsed following the V-collapse of u and v performed by s. Consequently U and V. Define U against s. Consequently U[s] on all other pairs. and thuc-incomparable dark equivalence relations R and S have an infimum.There are E and F have an infimum.For every m-degrees. We choose c.e. sets Q.(1) Let X and T is a lower bound for R and S, and R and S are Q ensures that R and S are dark.We will show that the equivalence relations E is a lower bound of R and S. Consider a reduction E to Q, and we have E to There is only one class g(x) must be an odd number.There are two different classes S contains only one non-computable co-c.e. class, namely, the S-class X and Y implies that the set m-reducible to both X and Y, and thus would be computable), which gives a contradiction.We distinguish two cases. 
If the class In each of the cases above, we showed that T is the greatest lower bound of R and S.Assume that X and Y such that m-degrees a), there are T-degrees that form a minimal pair and do not contain any set of a lower (2) The proof of the second part is similar to the first one, modulo the following key modification: One needs to choose X,\u00a0Y such that For every notation This theorem is a combination of the two following results. The first one is Selivanov\u2019s result about prN-requirement, see . Moreover, R-part and the S-part of R-part of B is guaranteed by the fact that each equivalence class of the R-part of U with an equivalence class of the S-part of T to U that hits only the S-part of U,Assume that there is P-requirements are satisfied. The last lemma guarantees that, given T to U or by providing two equivalence classes that can be collapsed in U to diagonalize against Q-requirements are also satisfied because we carefully avoid, within the construction, to collapse classes of the same parity. We are in the position now to show that all By modifying the last proof, we can obtain something stronger.R is S-dark or S is R-dark, then R,\u00a0S have no If S is R-dark. Then the proof of Proposition\u00a0Suppose that T is a U by employing precisely the same construction as in Theorem\u00a0Assume that the function R-part or the S-part of the relation Thus, one of the following two cases holds.Suppose that for some Case 1. The function R-part and infinitely many classes in the S-part. Then the same argument as in the proof of Lemma\u00a0R-darkness of S.Case 2. Assume that R-part and only finitely many classes in the S-part. Then choose a computable function S to U, the c.e. set R-classes. Therefore, one can choose an infinite R-c.e. set A with the following properties: A is an R-c.e. transversal of S, which contradicts the R-darkness of S.Therefore, one can re-prove Lemma\u00a0R,\u00a0S are mutually dark equivalence relations of any complexity, then they have no sup.The above result contrasts with the fact that R is essentiallyn-dimensional if R has exactly n noncomputable equivalence classesWe finally turn to the problem of whether there are equivalence relations now from that thiR and S such that both R and S properly belong to R,\u00a0S have no Suppose that A properly belonging to the class U and V such that U and V are Q, R, and S as follows:Q,\u00a0R,\u00a0S is light and properly belongs to Fix a set T is the supremum of R and S, we have T must be essentially 1-dimensional. Let T-class. The conditions T should be reducible to F, we have either U and V. Thus, R and S have no supremum. Assume that mentclass2pt{minimWe conclude the paper with the following open question.For which"} +{"text": "Nature Communications 10.1038/s41467-020-18005-7, published online 24 August 2020.Correction to: The original version of this Article contained an error in Fig. 1c. The length of the silicon core at the center of the panel was incorrectly labelled as \u20184\u2009mm\u2019, rather than the correct \u20184\u2009\u00b5m\u2019. This has been corrected in both the PDF and HTML versions of the Article."} +{"text": "Nature Communications 10.1038/s41467-020-20265-2, published online 4 January 2021.Correction to: The original version of this Article contained an error in the title, which was previously incorrectly given as \u2018Three-phase electric power driven electoluminescent devices\u2019. 
The correct version states ‘electroluminescent’ in place of ‘electoluminescent’. This has been corrected in both the PDF and HTML versions of the Article."}
{"text": "We established a universality of logarithmic loss over a finite alphabet as a distortion criterion in fixed-length lossy compression. For any fixed-length lossy-compression problem under an arbitrary distortion criterion, we show that there is an equivalent lossy-compression problem under logarithmic loss. The equivalence is in the strong sense that finding good schemes in the corresponding lossy compression under logarithmic loss is essentially equivalent to finding good schemes in the original problem. This equivalence relation also provides an algebraic structure in the reconstruction alphabet, which allows us to use known techniques in the clustering literature. Furthermore, our result naturally suggests a new clustering algorithm in the categorical data-clustering problem. Logarithmic loss is a unique distortion measure in the sense that it allows a "soft" estimation (or reconstruction) of the source. Although logarithmic loss plays a crucial role in learning theory, not much work had been published regarding lossy compression until recently; a few exceptions are a line of work on multiterminal source coding [2,3], …
In this paper, we present a new universal property of logarithmic loss in fixed-length lossy-compression problems. Consider an arbitrary fixed-length lossy-compression problem, where source and reconstruction alphabets are 𝒳 and …. We show that:
• the optimal schemes for the two problems are the same; and
• a good scheme for one problem is also a good scheme for the other.
We are more precise about the "optimal" and "goodness" of the scheme in later sections. This finding essentially implies that it is enough to consider the lossy-compression problem under logarithmic loss.
The above correspondence provides new insights into the fixed-length lossy-compression problem. In general, the reconstruction alphabet in the lossy-compression problem does not have any well-defined operations. However, in the corresponding lossy compression under logarithmic loss, reconstruction symbols are probability distributions that have their own algebraic structure. Thus, under the corresponding setting, we can apply various techniques, such as the information-geometric approach, clustering with Bregman divergence, and relaxation of the optimization problem. Furthermore, the equivalence relation suggests a new algorithm in the categorical data-clustering problem, where data are not in the continuous space.
The remainder of the paper is organized as follows. In …
Notation: Uppercase X denotes a random variable, where 𝒳 denotes its alphabet; lowercase x denotes a specific possible realization of the random variable X, i.e., …; an n-dimensional random vector ….
Suppose 𝒳 is a finite set of discrete symbols, and X is a source with finite alphabet …. An encoder maps the source to one of M messages; on the other side, a decoder …. In this section, we briefly introduce the basic settings of the fixed-length lossy-compression problem.
In a fiFirst, we can define the code that the expected distortion is lower than a given distortion level.Definition\u00a01.An (Average distortion criterion)The minimum number of codewords required to achieve average distortion not exceeding D is defined bySimilarly, we can define the minimum achievable average distortion given number of codewords M.One may consider a stronger criterion that restricts the probability of exceeding a given distortion level.Definition\u00a02.An (Excess distortion criterion)The minimum number of codewords required to achieve excess distortion probability \u03f5, and distortion D is defined bySimilarly, we can define the minimum achievable excess distortion probability given target distortion D and number of codewords M.D and Given target distortion There exists a unique rate-distortion function achieving conditional distribution We assume that x, then, there is no difference between If We make the following benign assumptions:Define the information density of joint distribution D-tilted information that plays a key role in fixed-length lossy compression.Then, we are ready to define Definition\u00a03.The D-tilted information in where the expectation is with respect to the marginal distribution of ]\u2265H(X|X^\u22c6)X and In the above theorem, distortion Equation is the \u201cX and reconstruction M, the minimal achievable distortion is given by X and X and On the other hand, we viewed the \u201cinformation\u201d rate-distortion function differently. We considered the one-shot setting where source Remark\u00a01.In the corresponding fixed-length lossy-compression problem under logarithmic loss, the minimal achievable average distortion given number of codewords M iswhere the conditional entropy is with respect to distribution Remark\u00a02.From now on, we denote the original lossy-compression problem under given distortion measure n-dimensional binary vector where n is fixed, so the problem is in the one-shot setting. Distortion measure d is separable Hamming distortion, i.e.,M be the number of messages. Then, we are interested in optimal encoding and decoding schemes that achieve distortion In this section, we consider the memoryless Bernoulli source under Hamming distortion measure as an example of the above equivalence. Let In this scenario, the information rate-distortion function is not hard to compute :7)R(D)=R(D)=(7)RThen, the corresponding problem is the rate-distortion problem under logarithmic loss where the set of reconstruction symbols isRemark\u00a03.We can rewrite Equation in this The above equation explicitly shows the correspondence between logarithmic loss and the original distortion measure.Theorem 2 implies that, for any fixed-length lossy-compression problem, we can find an equivalent problem under logarithmic loss where optimal encoding schemes are the same. Thus, without loss of generality, we can restrict our attention to the problem under logarithmic loss with reconstruction alphabet f and g are a suboptimal encoder and decoder for the original fixed-length lossy-compression problem. Then, the theorem impliesSuppose The left-hand side of Equation (9) is the cost of suboptimality for the corresponding lossy-compression problem. On the other hand, the right-hand side is proportional to the cost of suboptimality for the original problem. In In general, reconstruction alphabet M. 
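The Bernoulli/Hamming example above can be checked numerically, assuming the standard form R(D) = h(p) − h(D) for the rate-distortion function in Equation (7). The sketch below also verifies the correspondence pointed out in Remark 3: the "soft" reconstruction symbol q with q(y) = 1 − D and q(1 − y) = D attains expected logarithmic loss h(D), the conditional entropy appearing in the remarks.

import numpy as np

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def log_loss(x, q):
    """Logarithmic loss of reconstructing the symbol x by a distribution q."""
    return -np.log2(q[x])

p, D = 0.5, 0.11
print("R(D) =", h(p) - h(D), "bits")           # Bernoulli(p), Hamming level D < p

# Soft reconstruction standing for the hard symbol y = 0:
q0 = np.array([1.0 - D, D])
expected = (1 - D) * log_loss(0, q0) + D * log_loss(1, q0)
print(expected, "equals h(D) =", h(D))         # expected log loss = H(X | X_hat)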
Note that this is a single-letter version of We alssion of coincides with Equation (10), we haveRemark\u00a04.In extend the reconstruction set from \ud835\udcb4 to find the measure The above result (13) implies that In the previous section, we obtained the optimal reconstruction symbol from the extended reconstruction alphabet, and projected it to the feasible set. In this section, instead of direct projection to \ud835\udcb4, we propose another slight extension of \ud835\udcb4, namely, log-convex hull. As we show in the following sections, the log-convex hull has interesting properties.p and q be probability distributions in p and q is given byBefore defining the log-convex hull, we need to define the log-convex combination of probability distributions. Let r is a weight vector (i.e., It is clear to see that Instead of having projection of convex, (, [TheorerI-projection of Projection rI-projection satisfies the following inequality for all Csisz\u00e1r and Mat\u00fa\u0161 (, [TheoreOn the other hand, the log-convex combination of probability measures measures . The autThe above result holds for any Remark\u00a05.The above result is similar to the projection to polytope in Euclidean space. Suppose vectors where As we saw in the previous section, we want to find Since the first term is not a function of Thus, minimizing Since the objective function is a convex function of In the corresponding lossy-compression problem under logarithmic loss, reconstruction symbols are probability measures that have a natural algebraic structure, as we discussed in k-means clustering to a lossy-compression problem [k-means clustering is only available when there exists a well-defined operation in k-means clustering requires computing the mean of data points, which is the center of each cluster. In general lossy-compression problems, reconstruction alphabet k-medoidlike clustering [k-medoidlike algorithm in the context of lossy compression is shown in Algorithm 1.Lossy compression is closely related to the clustering problem ,20,21. M problem ,23,24, w problem ,26. Howeustering , where tAlgorithm 1k-medoidlike clustering in lossy compression.\u2003repeat\u2003\u2003Set \u2003\u2003fordo\u2003\u2003\u2003\u2003\u2003end for\u2003\u2003forMdo\u2003\u2003\u2003\u2003\u2003end for\u2003until converge\u2003Randomly initialize k-meanslike clustering algorithm, as shown in Algorithm 2.On the other hand, in the corresponding problem, the reconstruction alphabet is the set of probability distributions where operations such as log-convex combinations are well-defined. This allows us to propose a Algorithm 2k-meanslike clustering in lossy compression.\u2003repeat\u2003\u2003Set \u2003\u2003fordo\u2003\u2003\u2003\u2003\u2003\u2003Set \u2003\u2003end for\u2003\u2003forMdo\u2003\u2003\u2003\u2003\u2003end for\u2003until converge\u2003Randomly initialize k-means clustering [The main idea of the above algorithm is that log-convex combination ustering ,29. The mean are not well-defined in this case, it is hard to apply known data-clustering algorithms in continuous space. The key idea is that the equivalence relation with logarithmic loss allows the algebraic structure on any set. More precisely, we can transform any clustering problem to the clustering problem in continuous space and apply known techniques such as variations of k-means.The idea of the previous section can be applied to an actual clustering problem. 
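Before turning to the categorical case, here is a small illustration of the centroid step of the k-means-like scheme (Algorithm 2): the cluster centre is the log-convex combination of its members, i.e. the normalized weighted geometric mean, which is the exact minimizer of Σi wi KL(c‖pi) over the simplex. The assignment rule by reverse KL used below is our assumption, chosen to match that centroid objective; the algorithm in the text may differ in this detail.

import numpy as np

rng = np.random.default_rng(1)

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

def log_convex_mean(ps, w):
    """Normalized weighted geometric mean: the log-convex combination of the
    distributions ps, which minimizes sum_i w_i KL(c || p_i) over the simplex."""
    c = np.exp(np.average(np.log(ps), axis=0, weights=w))
    return c / c.sum()

def cluster(ps, M=3, iters=50):
    n, _ = ps.shape
    centers = ps[rng.choice(n, M, replace=False)]
    for _ in range(iters):
        # assignment step (assumed rule, matching the centroid objective)
        labels = np.array([min(range(M), key=lambda j: kl(centers[j], p)) for p in ps])
        for j in range(M):
            members = ps[labels == j]
            if len(members):
                centers[j] = log_convex_mean(members, np.ones(len(members)))
    return labels, centers

# Toy data: strictly positive points of the probability simplex.
ps = rng.dirichlet(np.ones(4) * 2.0, size=60)
labels, centers = cluster(ps)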
We mainly focus on clustering categorical data where data points are not in continuous space ,32,33,34M clusters.A more rigorous definition of the problem is given below. Assume that we have a finite set of data points \ud835\udcb3, and each data point has its weight M. Let k-means to the corresponding problem. For example, Algorithm 2 can be applied to the corresponding problem.If we let Equation . Then, wRemark\u00a06.Note that it is hard to have an exact analytic formula for k-meanslike clustering algorithm in categorical data-clustering problems.To conclude our discussion, we summarize our main contributions. We showed that for any fixed-length lossy-compression problem under an arbitrary distortion measure, there exists a corresponding lossy-compression problem under logarithmic loss where optimal schemes coincide. We also proved that a good scheme for one lossy-compression problem is also good for another problem. This equivalence provides an algebraic structure on any reconstruction alphabet that allows using various optimization techniques in lossy-compression problems, such as log-convex relaxation. Furthermore, our results naturally suggest a"} +{"text": "With the great significance of biomolecular flexibility in biomolecular dynamics and functional analysis, various experimental and theoretical models are developed. Experimentally, Debye-Waller factor, also known as B-factor, measures atomic mean-square displacement and is usually considered as an important measurement for flexibility. Theoretically, elastic network models, Gaussian network model, flexibility-rigidity model, and other computational models have been proposed for flexibility analysis by shedding light on the biomolecular inner topological structures. Recently, a topology-based machine learning model has been proposed. By using the features from persistent homology, this model achieves a remarkable high Pearson correlation coefficient (PCC) in protein B-factor prediction. Motivated by its success, we propose weighted-persistent-homology (WPH)-based machine learning (WPHML) models for RNA flexibility analysis. Our WPH is a newly-proposed model, which incorporate physical, chemical and biological information into topological measurements using a weight function. In particular, we use local persistent homology (LPH) to focus on the topological information of local regions. Our WPHML model is validated on a well-established RNA dataset, and numerical experiments show that our model can achieve a PCC of up to 0.5822. The comparison with the previous sequence-information-based learning models shows that a consistent improvement in performance by at least 10% is achieved in our current model. Biomolecular functions usually can be analyzed by their structural properties through quantitative structure-property relationship (QSPR) models (or quantitative structure-activity relationship (QSAR) models). Among all the structural properties, biomolecular flexibility is of unique importance, as it can be directly or indirectly measured by experimental tools. Debye-Waller factor or B-factor, which is the atomic mean-square displacement, provides a quantitative characterization of the flexibility and rigidity of biomolecular structures. With the strong relationship between structure flexibility and functions, various theoretical and computational methods have been proposed to model the flexibility of a biomolecular. 
Such methods include molecular dynamics (MD) , normal Other than the above deterministic models, data-driven machine learning models are also considered in flexibility analysis \u201329, thanMore recently, a persistent-homology (PH)-based machine learning model is proposed . In thisk-simplexes with k starting from 0. In particular, by assigning a weight value of 0 or 1 to each point, we can naturally arrive at a local PH model and element-specific PH model is divided into N bins of equal size f. The number of barcodes which are located on each bin are then counted and used as feature vector [i is defined as:bj and dj are referring to birth and death of bar j. B is referring to the collections of barcodes with \u03b1 referring to the selection of atoms and D referring to the dimension of the Betti numbers. Essentially, for each C1 atom, we have a N * 1 topological vector for each element and dimension of the Betti numbers.In this paper, we only consider topological features constructed using a binning approach . More spe vector , 55. MorAfter the topological features are represented as a feature vector, it can serve as input to predict B-factor values with ML algorithms. We consider four main ML models, namely regularized linear regression, tree-based methods (including random forest and extreme gradient boosting), support vector regression, and artificial neural networks. All our ML algorithms are implemented in Python (packages mentioned below refer to the packages in Python).n data ith sample , ith sample, and p is the number of structured features. Conventionally, we denote by \u0177 the predicted normalized B-factor value of a sample.In the following descriptions of the ML models, we assume that we train our models with x by \u0177 [Linear regression is a straightforward yet efficient approach to model the relationship between a quantitative response and features. The incorporation of regularization can effectively address the high dimensionality setting where the number of features is larger than the sample size. The variable selection feature of the regularized linear regression makes it particularly suitable for our task as our feature vector is usually lengthy. The general formulation of regularized linear regression can be read as the following regularized minimization problem:acy of \u0177 \u201360.\u03b1\u2016\u03b2\u20161), where \u03b1 is the tuning parameter that strikes the balance between efficiency and regularization. The regression problem with these two types of regularization are also known as Ridge regression [least absolute shrinkage and selection operator (LASSO) [scikit-learn\u201d [In our study, we consider the two typical choices of gression and leas (LASSO) , respectt-learn\u201d .Random Forest (RF) [Extreme Gradient Boosting (XGBoost) [Classification And Regression Tree (CART) or decisest (RF) , 64 and XGBoost) .RH. RH is an ensemble learning method that creates a variety of decision (regression) trees independently during training, where each decision tree is constructed using a random subset of the features as split candidates. During the training of each tree, the split at each node is determined by the least-square method. In other words, for each region of each tree, we predict the B-factor value with the average of the B-factor values of the samples fallen in the region. In a regression RF, the final prediction is the average of the predicted values of all individual trees. 
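Returning for a moment to the feature construction described at the start of this section (before continuing with the tree ensembles): the binning of a persistence barcode into an N × 1 count vector can be written in a few lines. Counting a bar in every bin that its lifespan [birth, death) overlaps is one common convention; the text does not pin the counting rule down, so this choice is an assumption.

import numpy as np

def binned_barcode_features(bars, F=30.0, f=0.5):
    """Counts of persistence bars overlapping each bin of size f on [0, F).

    bars: array of shape (k, 2) with rows (birth, death) for one atom,
    one element type and one homology dimension.
    """
    n_bins = int(np.ceil(F / f))
    feats = np.zeros(n_bins, dtype=int)
    for birth, death in bars:
        lo = max(0, int(birth // f))
        hi = min(n_bins, int(np.ceil(death / f)))
        feats[lo:hi] += 1          # one count per bin the bar overlaps
    return feats

# e.g. two Betti-0 bars for the local neighbourhood of one atom
example = np.array([[0.0, 2.1], [0.0, 7.4]])
print(binned_barcode_features(example, F=10.0, f=1.0))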
In the implementation of ensemble trees, the number of trees, minimum number of samples at each leaf node, and the number of split candidates in each splitting, i.e., parameter mtry, are all tuning parameters. In our application of RF, we choose scikit-learn\u201d [ Breiman and tunet-learn\u201d .XGBoost. Has been one of the popular ML tools used by the winning teams of many ML challenges and competitions, such as the Netflix prize [xgboost\u201d [ix prize and varixgboost\u201d .\u03b20 + x\u22a4\u03b2 that has at most \u03f5 deviation from the actual target values yi for all the training data while trying to be as flat as possible [\u03b20, \u03b2) is determined by the following minimization problem:\u03be and C determines the trade-off between the efficiency and the amount up to which deviation larger than \u03f5 is tolerable. Typically, we adopt kernel methods to transform the input features from a lower to a higher dimensional space, where a linear fit is feasible. Common choices of kernel include polynomial kernel, Gaussian kernel, and radial basis function (RBF) kernel. In our study, we have opted to use RBF kernel, i.e., scikit-learn\u201d [SVR , as a vepossible . Sometimpossible . The SVRt-learn\u201d .ANN has been proved to be capable of learning to recognize patterns or categorize input data after training on a set of sample data from the domain . The abikeras\u201d [In our study, the number of hidden layers, number of nodes in each hidden layer, and the number of epochs are treated as hyperparameters. The hidden and output activation functions are set as sigmoid and leaky ReLU functions respectively. The dropout rate is set to 20% and the remaining hyperparameters are set to default values as defined by the package. ANN is implemented with the package \u201ckeras\u201d .We consider the same RNA dataset and data preprocessing by Guruge et al. . The chaThe values of B-factors may differ significantly from chain to chain due to reasons such as a relatively small number of residues in a protein chain or differences in refinement methods used . Thus, tE, F/E ratio, and bin size f are the hyperparameters to be optimized. We chose the value of E to be in the range from 10 \u00c5 to 45 \u00c5 with a stepsize of 5 \u00c5, i.e., E = {10 \u00c5, 15 \u00c5, 20 \u00c5, 25 \u00c5, 30 \u00c5, 35 \u00c5, 40 \u00c5, 45 \u00c5}. The filtration interval F is defined such that the ratio of F/E is between 0.5 to 1.0 with a stepsize of 0.1, i.e., F/E = {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}. Bin size f is chosen to be in the range from 0.15 \u00c5 to 1.50 \u00c5, i.e., f = {0.15 \u00c5, 0.50 \u00c5, 1.00 \u00c5, 1.50 \u00c5}. A total of 32,823 PBs are generated based on the Vietoris-Rips complex for each combination of element type, E and F/E ratio. Both \u201cGUDHI\u201d [Dionysus\u201d [In our dataset, cut-off distance \u201cGUDHI\u201d and \u201cDioionysus\u201d packagesTo determine the optimal hyperparameter values for each ML model, we conduct a five-fold cross validation (CV) using the training set. Specifically, the training set is randomly divided into five folds with a similar number of chains. In each fold, for each combination of the hyperparameters, we find the predicted B-factor values for the left-out training set with the ML model trained by the remaining training set. The optimal hyperparameter set maximizes the out-of-sample PCC between the predicted and actual values across all folds. 
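A compact sketch of the evaluation loop just described, assuming the topological features and normalized B-factors have already been assembled into arrays; X, y and chains are hypothetical names, and grouping the folds by chain stands in for the paper's division into five folds with a similar number of chains.

import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold

def cv_pcc(X, y, chains, **rf_params):
    """Cross-validated PCC between predicted and actual normalized B-factors.

    X: feature matrix (one row per atom); y: float array of normalized
    B-factors; chains: chain identifier per row, used to keep whole RNA
    chains inside a single fold.
    """
    preds = np.empty_like(y, dtype=float)
    for train, test in GroupKFold(n_splits=5).split(X, y, groups=chains):
        model = RandomForestRegressor(random_state=0, **rf_params)
        model.fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    return pearsonr(preds, y)[0]   # out-of-sample PCC pooled across all folds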
The optimal hyperparameter values for each ML model can be found in i-th B-factor value, Once the hyperparameter values of the dataset and models have been optimized, the trained models are evaluated using a test set that was non-overlapping with the training set. The PCC between the predicted and actual normalized B-factor values in the test set is calculated for each modelIn this section, we demonstrate the performance of our WPHML model. For both single-element and four-element-combined models, it can be seen that WPHML models are able to consistently outperform the evolution-based method (PSSM) by at least approximately 10% with only the exception of linear regression models (Ridge and LASSO). Among all the models, RF achieves the best result with PCC = 0.5788 (15.1% improvement). Moreover, the performance of the RF model further improves to 0.5822 when the topological features for all four elements were used, which is about 15.8% improvement.The comparison between the results from single-element and four-element-combined models shows that generally there is no significant improvement. In fact, SVM improves only slightly (approximately 0.8%), while XGBoost and ANN models even show some small reduction of accuracy (1.8% and 2.4% respectively). The results seem to be different from previous studies that concluded that element-specific models always deliver better results , 36\u201338. Comparably speaking, RNA structures are more regular and relatively simple. Similar topological features may be embedded in different types of element models. In this way, the additional features do not incorporate new information, instead they will contribute more noises, which causes the drop in performances. Noted that the best test performance of all the models except linear regression using a single element are all based on element P.One of the reasons that larger cut-off distance delivers good results is that our predicted PCC values are predominantly determined by the several larger-sized RNAs. From In this paper, we propose the weighted-persistent-homology-based machine learning (WPHML) models and use them in the RNA B-factor prediction. We found that our WPHML models can consistently deliver a better performance than the evolution-based learning models. In particular, local persistent homology and element-specific persistent homology are considered for topological feature generation. These topological-feature-based random forest models can deliver a PCC up to 0.5822, which is 15.8% increase as compared to the performance of the previous model. Our WPHML models are suitable for any biomolecular-structure-based data analysis. Note that more sophisticated feature engineering of sequence-based information can further improve the accuracy to 0.61 . This agS1 Table(PDF)Click here for additional data file.S2 Table(PDF)Click here for additional data file."} +{"text": "In the original article, there was an error in the title. Instead of \u201cHypernetwork Construction and Feature Fusion Analysis Based on Sparse Group Lasso Method on Functional fMRI Dataset\u201d it should be \u201cHypernetwork Construction and Feature Fusion Analysis Based on Sparse Group Lasso Method on fMRI Dataset\u201d.Additionally, in Equations (4) and (7) the parameters mentioned and its explanations were wrong. \u201cn\u201d should be superscript and not subscript. Also in Equation (4), \u201cx\u201d should not be in italics and \u201c\u03b1\u201d should be in italics. 
A correction has been made to the following sections:

The Materials and Methods section, subsection Construction of Hypernetwork, sub-subsection Sparse Linear Regression Model, paragraph 2:

"The average time series of the m-th ROI for the n-th subject is denoted x^n_m, with T being the number of time points in the time series; the remaining symbols of Equation (4) denote, respectively, the data matrix formed by the time series of all ROIs other than the m-th ROI, the weight vector of the m-th ROI, and the residual of the m-th ROI."

The Materials and Methods section, subsection Construction of Hypernetwork, sub-subsection Construction of Hypernetworks Based on the Sparse Group Lasso Method, paragraph 3:

"Similar to the gLasso method, clustering was adopted before creating the hyperedge, and then the sgLasso method was used to construct the hyperedge by solving the sparse linear regression model. The method is represented by the optimization objective function of Equation (7), in which Gi is a node group with tree structure. λ1 and λ2 are regression parameters, with λ1 being used to adjust the sparsity of intra-groups to control the number of non-zero coefficients in non-zero groups, and λ2 being used to adjust the group-level sparsity; all three models were compared when λ2 was equal to 0.4."

The authors apologize for these errors and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.

As of April 16, 2020, the novel coronavirus disease had spread to more than 185 countries/regions, with more than 142,000 deaths and more than 2,000,000 confirmed cases. In the bioinformatics area, one of the crucial tasks is the analysis of virus nucleotide sequences using approaches such as data stream, digital signal processing, and machine learning techniques and algorithms. To make this approach feasible, however, it is necessary to transform the nucleotide sequence strings into a numerical representation. Thus, the dataset provides a chaos game representation (CGR) of SARS-CoV-2 virus nucleotide sequences. The dataset provides the CGR of 100 instances of the SARS-CoV-2 virus, 11,540 instances of other viruses from the Virus-Host DB dataset, and three instances of Riboviria viruses from NCBI. With this form of the data, it is possible to use data stream, digital signal processing, and machine learning algorithms.

• All researchers in the bioinformatics, computing science, and computing engineering fields can benefit from these data, because with this numeric representation they can apply techniques such as machine learning and digital signal processing to genomic information.
• Data experiments that use clustering and classification techniques on SARS-CoV-2 virus genomic information can be performed with this dataset.
• These data represent an easy way to evaluate the SARS-CoV-2 virus genome.

This work presents a new dataset of a chaos game representation (CGR) of SARS-CoV-2 virus nucleotide sequences. The dataset contains two kinds of data: the raw data and the processed data. The raw data comprise the 100 instances of the SARS-CoV-2 virus genome collected from the National Center for Biotechnology Information (NCBI). The dataset provides two groups of file formats for all data. In the first group, all data are stored in Matlab file format (.mat); in the second group, part of the data is stored in Microsoft Excel (.xlsx) and another part in text files (.txt). The two groups hold the same information.
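A minimal sketch of how such files could be read in Python is given below. The file paths follow the directory layout described in the next paragraphs; the variable key inside the .mat file and the one-point-per-line layout of the text files are assumptions, not documented facts.

```python
import numpy as np
from scipy.io import loadmat

# Load the Matlab-format CGR values; inspect the returned dict
# for the actual variable name stored in the file (assumption).
mat = loadmat("SARS-CoV-2 data/Matlab/CGRData.mat")
print(list(mat.keys()))

# Load the CGR points of a single virus from the text-file variant,
# assuming one (x, y) pair per line.
points = np.loadtxt("SARS-CoV-2 data/Excel and txt/CGRData/LocusName_COD.txt")
print(points.shape)
```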
The data is organized into three main directories: "SARS-CoV-2 data", "Virus-Host DB data" and "Other viruses data". Each main directory contains two sub-directories: "Matlab" and "Excel and txt".

Each sub-directory "Matlab" contains three files called "RawDataTable.mat", "RawData.mat" and "CGRData.mat". The "RawDataTable.mat" and "RawData.mat" files store the raw data information from the virus databases; they hold the same information, but in "RawDataTable.mat" the attributes are stored in Matlab table format (after the 2013b version), while in "RawData.mat" the attributes are stored in Matlab cell-array format. Each "CGRData.mat" file stores the CGR values of all viruses present in the corresponding "RawDataTable.mat" and "RawData.mat" files. For the main directory "Virus-Host DB data", the raw data are stored in 10 files, where the k-th file is called "RawData_k.mat".

Each sub-directory "Excel and txt" is composed of a file called "RawData.xlsx" and a sub-directory called "CGRData". Each "RawData.xlsx" file holds the raw data information from the virus databases, and each "CGRData" directory holds the CGR of the viruses present in the corresponding "RawData.xlsx" file. The points of the CGR associated with each virus are stored in a text file called "LocusName_COD.txt", where COD is the code (locus name) associated with the virus in GenBank.

The Chaos Game Representation (CGR), proposed by H. Joel Jeffrey, takes as input the nucleotide sequence s, expressed as s = [s(1), ..., s(N)], where N is the length of the sequence and s(n) is the n-th nucleotide of the sequence. Each n-th nucleotide s(n) is mapped to a bi-dimensional symbol (xs(n), ys(n)) corresponding to one of the four corners of the unit square; in Jeffrey's original assignment, A = (0, 0), C = (0, 1), G = (1, 1) and T = (1, 0). Each n-th symbol (xs(n), ys(n)) is transformed into CGR values (xp(n), yp(n)) by the recurrence

xp(n) = 0.5 (xp(n - 1) + xs(n)), yp(n) = 0.5 (yp(n - 1) + ys(n)),

with the starting point (xp(0), yp(0)) at the center of the square. The dataset presented in this work stores the resulting points (xp(n), yp(n)).

Recently, X. Fang et al., using a method given in previous research, constructed several classes of new MDS self-dual codes through (extended) generalized Reed-Solomon codes; in this paper, based on the same method, we achieve several further classes of MDS self-dual codes.

Maximum distance separable (MDS) self-dual codes have useful properties due to their optimality with respect to the Singleton bound and their self-duality. MDS self-dual codes are completely determined by their length. A q-ary [n, k] linear code is a k-dimensional subspace of the n-dimensional space over the field with q elements. The study of MDS self-dual codes has attracted a great deal of attention in recent years due to its theoretical and practical importance. The center of the study of MDS codes includes the existence and classification of MDS codes; for a given length n, the main interest here is to determine the existence, and give the construction, of q-ary MDS self-dual codes for various lengths. The problem is completely solved for the case where q is even. As the parameters of an MDS self-dual code are completely determined by the code's length, many MDS self-dual codes have also been constructed for the case where q is odd; in particular, Jin and Xing presented constructions of MDS self-dual codes through generalized Reed-Solomon codes.

In this section we introduce some basic notations of generalized Reed-Solomon codes and extended generalized Reed-Solomon codes.
For more details, the reader is referred to the standard references.

Throughout this paper, q is a prime power, Fq is the finite field with q elements, and n is a positive integer with n <= q. Let a = (a1, ..., an) consist of n distinct elements of Fq, and let v = (v1, ..., vn) consist of n nonzero elements of Fq. The k-dimensional generalized Reed-Solomon code (GRS for short) of length n associated with a and v is defined by

GRSk(a, v) = {(v1 f(a1), ..., vn f(an)) : f(x) in Fq[x], deg f(x) <= k - 1}.

It is well known that the code GRSk(a, v) is a q-ary [n, k, n - k + 1] MDS code (e.g., see the references). Furthermore, the extended generalized Reed-Solomon code

GRSk(a, v, ∞) = {(v1 f(a1), ..., vn f(an), f_{k-1}) : f(x) in Fq[x], deg f(x) <= k - 1},

where f_{k-1} is the coefficient of x^{k-1} in f(x), is a q-ary [n + 1, k, n - k + 2] MDS code (e.g., see the references).

Lemma 1. The solution space of the associated equation system can be described explicitly.

We define the auxiliary quantities needed in the sequel. The conclusion of the following lemma is straightforward; for completeness, we provide its proof.

Lemma 2. Let n be an even number. If there exists a suitable choice of a and v satisfying the self-orthogonality conditions, then there exists a q-ary MDS self-dual code of length n.

Proof. Exhibiting such a and v, the corresponding GRS code coincides with its dual. □

H. Yan observed the following.

Lemma 3. Let n be an even integer and let the analogous conditions hold for the extended code.

Lemma 4. Let m divide r - 1, where r is an odd prime power, and let Fq be the finite field with q elements. Suppose ω is a primitive m-th root of unity.

Theorem 1. Under the above assumptions, there exists a q-ary MDS self-dual code of the stated length.

Proof. Let ω be a primitive m-th root of unity and choose t distinct elements such that the associated cosets are pairwise disjoint; then the entries of the resulting evaluation vector are distinct. It is known that the relevant products can be computed explicitly. Let g be a generator of the multiplicative group of Fq. This implies that there exists a q-ary MDS self-dual code of the required length. □

Example 1. A suitable choice of parameters yields, for instance, the value 13,932.

Theorem 2. Under the stated conditions, a q-ary MDS self-dual code of the corresponding even length exists.

Proof. Choose t distinct even numbers as required; since v is a square element of Fq, the associated q-ary GRS code is self-dual. □

Example 2. An analogous explicit instance can be constructed.

Theorem 3. Under the stated conditions, a q-ary MDS self-dual code of the corresponding even length exists.

Proof. Choose t distinct even numbers as required. For any such choice, since t is odd, the required square conditions hold, so the associated q-ary code is self-dual. □

Example 3. An analogous explicit instance can be constructed.

Theorem 4. Under the stated conditions, a q-ary MDS self-dual code exists.

Proof. For any admissible choice, consider the following cases. Case 1: m is even and t is odd. Case 2: m and t are even; then A is an even integer. Case 3: m is odd and t is even. In Cases 1 and 3 the subcases (1) and (2) are distinguished; in each case the required conditions are verified. □

We can extend Theorem 1 to a more general case.

Theorem 5. Under the more general assumptions, a q-ary MDS self-dual code exists.

Proof. Let ω be a primitive m-th root of unity and choose t distinct elements such that the associated cosets are pairwise disjoint. Similarly to Theorem 1, we find that ω is a primitive s-th root of unity. Let g be a generator of the multiplicative group of Fq. Case 1: if m is odd and t is even, a suitable choice can be made directly. Case 2: if m is even, the construction is adjusted accordingly. So there exists a q-ary MDS self-dual code with length as stated. □

In this paper, based on the method from previous work, we constructed several classes of MDS self-dual codes.

In this paper, we study the binary classification problem for a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size Nq × R. Based on the s-divergence, the Bhattacharyya Upper Bound (BUB) is derived for the "Gaussian information plus noise"-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-R CPD tensor or a noisy Tucker tensor, vectorized into an N-dimensional measurement vector. The analysis relies on the exponential rate of the s-divergence over the parameter s. To circumvent the difficulty of optimizing over s, a simplified case is often considered: the s-divergence at s = 1/2 is the so-called Bhattacharyya divergence, and its optimized version is the Chernoff information.

Evaluating the performance limit for the "Gaussian information plus noise" binary classification problem is a challenging research topic; see, for instance, the cited works. A closed-form expression of the error probability is rarely tractable, so divergence-based upper bounds are used instead. Such detection problems arise in numerous applications, including sparse detection, energy detection, multi-input systems, network monitoring, and angular detection, just to name a few.

The rank of a P-order tensor is equal to the minimal positive integer, say R, of unit-rank tensors that must be summed up for perfect recovery; a unit-rank tensor is the outer product of P vectors. In addition, the CPD has remarkable uniqueness properties for a given rank R. Unfortunately, unlike the matrix case, the set of tensors with fixed rank is not closed. The tensor decomposition theory is a timely and prominent research topic. The TKD and the CPD are two standard decompositions; following the Eckart-Young theorem at each mode level, the TKD can be computed via low-rank approximations of the p-mode unfoldings. The classification performance of a multilinear tensor following the CPD and TKD can be derived and studied.
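As a primer for the formal definitions given in the next section, the sketch below builds a small rank-2, order-3 CPD tensor from its factor vectors with numpy and computes one q-mode unfolding. All sizes are toy values chosen for illustration.

```python
import numpy as np

def unfold(X, q):
    """q-mode unfolding: move axis q to the front and flatten the rest."""
    return np.moveaxis(X, q, 0).reshape(X.shape[q], -1)

# Factor matrices of size N_q x R with R = 2
rng = np.random.default_rng(1)
factors = [rng.normal(size=(n, 2)) for n in (4, 5, 6)]

# Sum of R unit-rank (outer-product) tensors
X = np.zeros((4, 5, 6))
for r in range(2):
    X += np.einsum("i,j,k->ijk",
                   factors[0][:, r], factors[1][:, r], factors[2][:, r])

print(unfold(X, 1).shape)  # (5, 24): N_2 rows, product of the other modes as columns
```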
It is interesting to note that the classification theory for tensors is very understudied. To the best of our knowledge, only the cited publication tackles this problem. More precisely, we consider two cases where the observations are either (1) a noisy rank-R tensor admitting a Q-order CPD with large factors of size Nq × R, or (2) a noisy tensor following a Q-order Tucker decomposition. In each case we derive, from the s-divergence, the Bhattacharyya Upper Bound (BUB), i.e., the Chernoff Information calculated at s = 1/2, under the null hypothesis of a pure-noise observation.

We note that Random Matrix Theory (RMT) has attracted both mathematicians and physicists since random matrices were first introduced in mathematical statistics by Wishart in 1928. Our analysis does not replace the information-geometric toolbox (s-divergence, Chernoff Upper Bound, Fisher Information, etc.), but completes it. Finally, let us underline that many arguments of this paper differ from the works presented previously, in which the classification of random tensors was tackled in a different regime.

The organization of the paper is as follows: In the second section, we introduce some definitions, tensor models, and the Marchenko-Pastur distribution from random matrix theory. The third section is devoted to presenting Chernoff Information for the binary hypothesis test. The fourth section gives the main results on Fisher Information and the Chernoff bound. The numerical simulation results are given in the fifth section. We conclude our work by giving some perspectives in the final section.

In this section, we introduce some useful definitions from tensor algebra and from the spectral theory of large random matrices.

Definition 1. The Kronecker product of matrices A and B is the block matrix A ⊗ B whose (i, j)-th block is a_{ij} B.

Definition 2. The vectorization vec(A) of a matrix A stacks its columns into a single column vector.

Definition 3. The q-mode product, denoted by X ×_q U, multiplies each q-mode fiber of the tensor X by the matrix U; in terms of unfoldings, (X ×_q U)_(q) = U X_(q).

Definition 4. The q-mode unfolding matrix X_(q) of a tensor X of size N1 × ... × NQ is the matrix of size Nq × (N1 ... NQ / Nq) whose columns are the q-mode fibers of X.

The rank-R CPD of order Q is defined according to

X = sum_{r=1}^{R} φ_r^(1) ∘ φ_r^(2) ∘ ... ∘ φ_r^(Q), with φ_r^(q) a vector of size Nq,

where ∘ denotes the vector outer product. An equivalent formulation uses the q-mode unfolding of X in terms of the q-th factor matrix Φ^(q) = [φ_1^(q), ..., φ_R^(q)] of size Nq × R.

The Tucker tensor model (TKD) of order Q is defined according to

X = C ×_1 T1 ×_2 T2 ... ×_Q TQ,

where C is the core tensor and the Tq are the factor matrices; the q-mode unfolding of X follows from Definitions 3 and 4. Following the definitions, we note that the CPD and TKD scenarios imply that the vectorized observation is built from Kronecker-structured M-dimensional vectors for which M remains fixed while the factor dimensions grow.

The Marchenko-Pastur distribution was introduced half a century ago, in 1967.

Theorem 1 (Marchenko-Pastur). Let W = (1/N) Θ Θ', where Θ is an M × N random matrix with i.i.d. zero-mean entries of unit variance. If M, N → ∞ with M/N → c in (0, 1], then the empirical eigenvalue distribution of W converges almost surely to the Marchenko-Pastur law supported on the interval [(1 - √c)^2, (1 + √c)^2]. Moreover, for N large enough, all the eigenvalues are located in a neighbourhood of this interval.

We also observe that Theorem 1 remains valid if the entries are not identically distributed but satisfy mild moment conditions (see, e.g., the RMT literature).

We denote by H0 and H1 the two hypotheses of the test, and we note that the quality of the estimated hypothesis is characterized asymptotically through the Kullback-Leibler divergence (KLD); the exact minimal error probability, however, rarely admits a closed form. The minimal Bayes' error probability conditionally to the observed vector is controlled by the s-divergence for X. The error exponent is defined through the limit of -(1/N) log of the error probability as N → ∞. As the parameter s is free, optimizing over s tightens the bound. Finally, using Equations (5) and (7), the Chernoff Upper Bound (CUB) is obtained.
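The CUB and BUB just introduced can be evaluated exactly for Gaussian hypotheses. The sketch below implements the standard Chernoff s-divergence between two multivariate Gaussians and optimizes it numerically over s (the Bhattacharyya bound corresponds to fixing s = 1/2). The rank-one covariance model is a toy assumption, not the paper's tensor model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_divergence(s, mu0, S0, mu1, S1):
    """Standard Chernoff s-divergence between N(mu0, S0) and N(mu1, S1)."""
    Ss = (1 - s) * S0 + s * S1
    dm = mu1 - mu0
    quad = 0.5 * s * (1 - s) * dm @ np.linalg.solve(Ss, dm)
    _, ld = np.linalg.slogdet(Ss)
    _, ld0 = np.linalg.slogdet(S0)
    _, ld1 = np.linalg.slogdet(S1)
    return quad + 0.5 * (ld - (1 - s) * ld0 - s * ld1)

# H0: white noise; H1: rank-one signal plus noise (toy 3-dimensional example)
mu = np.zeros(3)
S0 = np.eye(3)
S1 = np.eye(3) + np.outer([1.0, 2.0, 0.0], [1.0, 2.0, 0.0])

res = minimize_scalar(lambda s: -chernoff_divergence(s, mu, S0, mu, S1),
                      bounds=(1e-3, 1 - 1e-3), method="bounded")
print("s* =", res.x, "Chernoff information =", -res.fun)
print("Bhattacharyya value =", chernoff_divergence(0.5, mu, S0, mu, S1))
```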
Instead of solving Equation (7), the Bhattacharyya Upper Bound (BUB) is calculated from Equation (5) by fixing s = 1/2.

Lemma 1. The log-moment generating function given by Equation (6) for the test of Equation (4) admits a closed-form expression. Proof. See the Appendix.

From now on, to simplify the presentation and the numerical results later on, we adopt a normalized notation for the quantities involved.

Remark 1. The functions involved are smooth in s, which justifies the expansion of the s-divergence in the small deviation regime.

In the small deviation regime, we assume that the SNR is small.

Lemma 2. The s-divergence in the small deviation regime can be approximated by a quadratic expression in which the Fisher information appears. Proof. See the Appendix.

According to Lemma 2, the optimal s-value approaches 1/2 at low SNR, whereas a different s-value becomes optimal for larger SNR.

Lemma 3. In case of large dimensions, the log-moment generating function converges to a deterministic limit. Proof. See the Appendix.

When the measurement tensor is a Q-order tensor of size N1 × ... × NQ following a Q-order CPD with a canonical rank R and fixed M, the vectorized observation model applies; when it follows a Q-order TKD of given multilinear rank, the analogous model applies. The measurement tensor follows a noisy multilinear model, and we recall that in the CPD case the matrix of interest has Kronecker structure.

Result 1. In the asymptotic regime where the factor dimensions grow with fixed ratios, the error exponent admits a deterministic limit. Proof. See the Appendix.

Remark 2. In the related literature, the Central Limit Theorem describes the fluctuations around this limit.

In this section, we assume that the rank is fixed.

Result 2. In the small deviation regime, the error exponent is governed by the Fisher information. Proof. Using Lemma 2, the claim follows (see, for instance, the references). □

Result 3. In case of large SNR, the error exponent grows logarithmically. Proof. It is straightforward to notice that the dominant term is the log-determinant; the last equality can be obtained as in the references, and the claim follows using Lemma 3. □

Remark 3. It is interesting to note that the limit does not depend on Q as long as the rank remains small.

Since the rank R is supposed to be small compared to N, it is realistic to assume a low-rank CPD regime.

Result 4. Under this regime, the error exponent can be approximated in closed form. Proof. See the Appendix.

It is easy to notice that the second-order derivative of the log-moment generating function is positive, so the optimization over s is well posed. At low SNR, the optimal s-value admits an explicit approximation close to 1/2. Result 1 and the above approximation allow us to get the best error exponent at low SNR. Contrarily, when the SNR is large, the optimal s departs from 1/2. The two following scenarios can be considered.

In the TKD case, we recall that the matrix of interest has multilinear structure.

Result 5. In the asymptotic regime where the dimensions grow with fixed ratios, the error exponent admits a deterministic limit expressed through the Marchenko-Pastur law. Proof. See the Appendix.

Remark 4. We can notice that the limit involves the moments of the Marchenko-Pastur distribution (see, e.g., the RMT literature). As a consequence, the bound is explicit.

Result 6. In case of large SNR, the error exponent grows logarithmically. Proof. Using Lemma 3, we get immediately Equation (17). □

Under this regime, we have the following result.

Result 7. For small SNR, the error exponent is governed by the Fisher information. Proof. Using Lemma 2, we can notice that each term in the product converges a.s. towards the second moment of the Marchenko-Pastur distribution. □

Remark 5. Contrary to Remark 3, it is interesting to note that one regime yields a limit which does not depend on Q, and the other a limit which depends on Q. In practice, when c is close to 1, we have to carefully check whether Q lies in the relevant neighbourhood.

In this section, we consider cubic tensors of order Q, optimize over the s-value, and use L = 10,000 samples for the Monte-Carlo process. Firstly, for the CPD model, we compare the s-divergences: the s-divergence obtained by fixing s = 1/2 (BUB) and the optimized one (CUB). The mean square relative error goes to zero as c goes to zero, as predicted by our theoretical analysis. For the TKD scenario, we follow the same methodology as above for the CPD, optimizing over the s-value; the mean square relative error is, in mean, of a comparable order. We can also notice the convergence of the empirical quantities towards their deterministic limits.

In this work, we derived and studied the limit performance in terms of minimal Bayes' error probability for the binary classification of high-dimensional random tensors using both the tools of Information Geometry (IG) and of Random Matrix Theory (RMT).
The main results on Chernoff bounds and Fisher information are illustrated by Monte-Carlo simulations that corroborate our theoretical analysis. For future work, we would like to study the rate of convergence and the fluctuation of the statistics involved.

Many-valued algebras (MV-algebras) are algebraic systems that generalize Boolean algebras. The MV-algebraic probability theory involves the notions of the state and the observable, which abstract the probability measure and the random variable, both considered in the Kolmogorov probability theory. Within the MV-algebraic probability theory, many important theorems have recently been studied and proven. In particular, the counterpart of the Kolmogorov strong law of large numbers (SLLN) for sequences of independent observables has been considered. In this paper, we prove generalized MV-algebraic versions of the SLLN, i.e., counterparts of the Marcinkiewicz-Zygmund and Brunk-Prokhorov SLLN for independent observables, as well as the Korchevsky SLLN, where the independence of observables is not assumed. To this end, we apply the classical probability theory and some measure-theoretic methods. We also analyze examples of applications of the proven theorems. Our results open new directions of development of the MV-algebraic probability theory. They can also be applied to the problem of entropy estimation.

MV-algebras, being generalizations of Boolean algebras, were introduced by Chang and used in the algebraic analysis of many-valued logics. Carathéodory defined the basic notions of point-free probability, replacing Kolmogorovian probability measures on σ-algebras of sets. Within this line of research, Mundici considered states on MV-algebras as normalized functionals, and Riečan studied observables on weakly σ-distributive MV-algebras. Main theorems of the MV-algebraic probability theory, including the basic version of the central limit theorem (CLT), laws of large numbers, and the individual ergodic theorem, can be found in the literature. The MV-algebraic probability theory was also applied in the Atanassov intuitionistic fuzzy sets and interval-valued fuzzy sets settings.

We present three examples of the applications of the Marcinkiewicz-Zygmund, Brunk-Prokhorov, and Korchevsky SLLN for sequences of observables with convergent scaled sums. In particular, independent identically continuously distributed, as well as both independent and dependent discretely distributed observables are considered.

The problem of entropy estimation is important from both theoretical and practical points of view. Classical versions of the law of large numbers are used in this field; in particular, several authors have applied them to the estimation of entropy.

The paper is organized as follows. We first present some notations that are used in the paper. For a random variable X on a probability space, we denote by E(X) and D^2(X) the expected value and the variance of X with respect to P (if they exist); analogous notation is used for each real-valued random variable.

The foundations of the theory of MV-algebras can be found in the standard monographs.
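As a concrete primer for the definitions that follow, the standard Łukasiewicz MV-algebra on the unit interval takes x ⊕ y = min(1, x + y) and ¬x = 1 - x. The sketch below implements these operations and spot-checks some MV-algebra identities numerically; it is illustrative and not part of the paper.

```python
import itertools

def oplus(x, y):
    return min(1.0, x + y)

def neg(x):
    return 1.0 - x

def join(x, y):
    # x v y = neg(neg(x) oplus y) oplus y; equals max(x, y) on [0, 1]
    return oplus(neg(oplus(neg(x), y)), y)

grid = [i / 10 for i in range(11)]
for x, y in itertools.product(grid, repeat=2):
    assert abs(neg(neg(x)) - x) < 1e-12          # double negation
    assert oplus(x, neg(0.0)) == 1.0             # x oplus 1 = 1
    assert abs(join(x, y) - max(x, y)) < 1e-12   # lattice join is max
print("Lukasiewicz MV-algebra identities verified on a grid")
```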
In this section, we recall the basic notions used throughout the paper.

Definition 1. An MV-algebra is an algebra (M, ⊕, ¬, 0), where M is a non-empty set, the operation ⊕ is associative and commutative with zero as the neutral element, and for each x, y in M:

¬¬x = x, x ⊕ ¬0 = ¬0, ¬(¬x ⊕ y) ⊕ y = ¬(¬y ⊕ x) ⊕ x.

In an MV-algebra, a partial order is defined by the relation: x ≤ y if and only if ¬x ⊕ y = 1, where 1 = ¬0. The underlying lattice of M is the distributive lattice with the least element zero and the greatest element one, where the join and the meet are defined as follows:

x ∨ y = ¬(¬x ⊕ y) ⊕ y, x ∧ y = ¬(¬x ∨ ¬y),

for each x, y in M.

Definition 2. An MV-algebra M is called σ-complete (complete) if every sequence of elements of M (every subset of M) has the supremum in M.

For a non-empty set X and an MV-algebra M, we introduce the following notations: we write x_n ↗ x for a non-decreasing sequence with supremum x, and x_n ↘ x for the dual case.

Definition 3. Given a σ-complete MV-algebra M, a function m from M to [0, 1] is called a state on M if it satisfies the following conditions for each x, y in M and each sequence (x_n) in M:

(i) m(1) = 1;
(ii) if x and y are orthogonal, then m(x ⊕ y) = m(x) + m(y);
(iii) if x_n ↗ x, then m(x_n) ↗ m(x).

A state m on M is called faithful if m(x) > 0 for each non-zero element x of M.

Definition 4. A probability MV-algebra is a pair (M, m) consisting of a σ-complete MV-algebra M and a faithful state m on M.

If (M, m) is a probability MV-algebra, then M is complete (see Theorem 13.8 in the cited monograph).

Definition 5. Given a σ-complete MV-algebra M, a function x from the Borel sets of R^n to M is called an n-dimensional observable in M if it satisfies the following conditions:

(i) x(R^n) = 1;
(ii) x(A ∪ B) = x(A) ⊕ x(B) for disjoint Borel sets A and B, with x(A) and x(B) orthogonal;
(iii) x(A_n) ↗ x(A) for each sequence of Borel sets with A_n ↗ A.

Theorem 1. Given a σ-complete MV-algebra M, an n-dimensional observable x in M, and a state m on M, the function m_x = m ∘ x, described by m_x(A) = m(x(A)), is a probability measure on the Borel sets of R^n.

The proof of the above theorem can be found in the literature.

Definition 6. Let (M, m) be a probability MV-algebra. We call an observable x integrable in (M, m) if the expectation E(x) = ∫ t dm_x(t) exists. Moreover, we write x ∈ L^q(m) for q > 0 if ∫ |t|^q dm_x(t) < ∞, where this integral is the absolute q-th moment of x. If x ∈ L^2(m), then the variance of x is given by the formula D^2(x) = ∫ (t - E(x))^2 dm_x(t).

Definition 7. Observables x_1, ..., x_n in a probability MV-algebra (M, m) are said to be independent (with respect to m) if there exists an n-dimensional observable h_n (the joint observable) such that for arbitrary Borel sets A_1, ..., A_n:

m(h_n(A_1 × ... × A_n)) = m(x_1(A_1)) · ... · m(x_n(A_n)).

Remark 1. Let x_1, ..., x_n be independent observables in a probability MV-algebra (M, m) and let h_n be their joint observable. Then, for an arbitrary Borel measurable function g from R^n to R, the mapping given by g(x_1, ..., x_n) = h_n ∘ g^{-1} is an observable.

Convergence almost everywhere of observables in a probability MV-algebra was defined by Riečan and Mundici.

Definition 8. A sequence (y_n) of observables in a probability MV-algebra (M, m) is said to converge to zero m-almost everywhere (m-a.e.) if

m( inf_p sup_n inf_{k ≥ n} y_k((-1/p, 1/p)) ) = 1.

In the further part of this section, we will use the Kolmogorov probability space of observables considered by Riečan and Mundici.

Let (x_n) be a sequence of observables in (M, m) with joint observables h_n. We call the triplet consisting of the sequence space, its product σ-algebra and the induced probability measure P the Kolmogorov probability space of the observables; for each n, the coordinate random variable ξ_n maps a sequence to its n-th coordinate.

Proposition 1. Let (M, m) be a probability MV-algebra and (x_n) a sequence of independent observables in (M, m), with h_n being the joint observable of x_1, ..., x_n. Let the above triplet be the Kolmogorov probability space of the observables (x_n) in (M, m). For each n, let g_n be an arbitrary Borel function on R^n. Let further the observable y_n be defined by y_n = h_n ∘ g_n^{-1} and the random variable η_n be described by η_n = g_n(ξ_1, ..., ξ_n). Then m_{y_n} = P_{η_n}, and the convergence of (η_n) to zero P-a.s. implies the convergence of (y_n) to zero m-a.e.

In this section, we formulate and prove MV-algebraic counterparts of the Marcinkiewicz-Zygmund and Brunk-Prokhorov SLLN (see the classical versions in the references).

In the following part of the paper, in formulas containing integrals, we will assume that integration is with respect to the corresponding image measure.

Let us recall the following lemma (Lemma 3.2 from the reference), which will be used in the sequel.

Lemma 1. Let (M, m) be a probability MV-algebra and let x_1, ..., x_n be independent observables in M with joint observable h_n.
Then, for any real-valued Borel function g on R^n, the expected value E(g(x_1, ..., x_n)) exists if and only if:

∫ |g(t_1, ..., t_n)| dm_{h_n}(t_1, ..., t_n) < ∞.

Furthermore, if the above condition is satisfied, then:

E(g(x_1, ..., x_n)) = ∫ g(t_1, ..., t_n) dm_{h_n}(t_1, ..., t_n).

Proof. The claim follows by transferring the computation to the Kolmogorov probability space (see, e.g., Theorem 16.12 in the reference). □

The MV-algebraic version of the Marcinkiewicz-Zygmund SLLN concerns the case of independent, identically distributed observables and their scaled sums T_n, defined as follows.

Theorem 2. Given a probability MV-algebra (M, m), let (x_n) be an independent sequence of observables in (M, m) having the same distribution, with x_1 ∈ L^p(m) for some p ∈ (0, 2). Let S_n denote the observable sum of x_1, ..., x_n (defined through the joint observable), and let T_n = n^{-1/p}(S_n - a_n), with centering a_n = n E(x_1) for p ∈ [1, 2) and a_n = 0 for p ∈ (0, 1). Then (T_n) converges to zero m-a.e.

Proof. By Proposition 1, it suffices to prove the corresponding convergence for the random variables (η_n) in the Kolmogorov probability space, which follows from the classical Marcinkiewicz-Zygmund SLLN; hence (T_n) converges to zero m-a.e. □

The following MV-algebraic version of the Brunk-Prokhorov SLLN concerns sequences of observables that are not necessarily identically distributed.

Theorem 3. Let α ≥ 1. Given a probability MV-algebra (M, m), let (x_n) be an independent sequence of observables in (M, m) such that E(x_n) = 0 and x_n ∈ L^{2α}(m) for each n. Assume that:

sum_n E(|x_n|^{2α}) / n^{α+1} < ∞.

Then the scaled sums (S_n / n) converge to zero m-a.e.

Proof. We denote by (η_n) the corresponding random variables in the Kolmogorov probability space; the assertion follows from Proposition 1 and the classical Brunk-Prokhorov theorem. □

In this subsection, we consider the convergence of observables in a probability MV-algebra with product.

Remark 2. Let x_1, ..., x_n be observables in a probability MV-algebra with product (M, m). By Proposition 2.4 from the reference, there exists an n-dimensional observable h_n such that for arbitrary Borel sets A_1, ..., A_n:

h_n(A_1 × ... × A_n) = x_1(A_1) · ... · x_n(A_n),

called the joint observable of x_1, ..., x_n. Moreover, for an arbitrary Borel measurable function g on R^n, the formula g(x_1, ..., x_n) = h_n ∘ g^{-1} defines a one-dimensional observable.

Let the Kolmogorov probability space of the observables be defined as before. The following proposition is a consequence of Theorem 3.17 and Proposition 3.16 from the reference.

Proposition 2. Let (M, m) be a probability MV-algebra with product and (x_n) be a sequence of observables in (M, m), with h_n being the joint observable of x_1, ..., x_n. Let the above triplet be the Kolmogorov probability space of the observables (x_n) in (M, m). For each n, let g_n be an arbitrary Borel function on R^n. Let further the observable y_n be defined by y_n = h_n ∘ g_n^{-1} and the random variable η_n be described by η_n = g_n(ξ_1, ..., ξ_n). Then the convergence of (η_n) to zero P-a.s. implies the convergence of (y_n) to zero m-a.e.

We formulate and prove the main theorem of this subsection. For its classical counterpart and some notations, we refer the reader to Korchevsky's work.

Theorem 4. Let observables (x_n) in a probability MV-algebra with product (M, m) be non-negative, i.e., m(x_n([0, ∞))) = 1, and let their absolute moments of some order be finite. Let (b_n) be a non-decreasing unbounded sequence of positive numbers. If the growth condition on the scaled expectations holds and the Korchevsky-type moment condition involving a function ψ of the admissible class is satisfied, then ((S_n - E(S_n))/b_n) converges to zero m-a.e.

Proof. The claim follows from Proposition 2 and the classical Korchevsky theorem applied in the Kolmogorov probability space. □

We also present two special cases of the above theorem, corresponding to Theorems 2 and 3 formulated by Korchevsky within the classical theory.

Theorem 5. Let observables (x_n) in a probability MV-algebra with product (M, m) be non-negative and have finite variances. Let (b_n) be a non-decreasing unbounded sequence of positive numbers. If the growth condition holds and the variance series condition involving a function ψ of the admissible class is satisfied, then ((S_n - E(S_n))/b_n) converges to zero m-a.e.

Theorem 6. Let observables (x_n) in a probability MV-algebra with product (M, m) be non-negative and have finite absolute moments of some order. Let (b_n) be a sequence of positive numbers satisfying the analogous conditions for some function ψ of the admissible class. Then ((S_n - E(S_n))/b_n) converges to zero m-a.e.

Theorem 5 follows from Theorem 4 applied for the second-moment case, and Theorem 6 follows by a similar argument.

We analyze the asymptotic behavior of scaled sums of three sequences of observables. They are independent identically continuously distributed in the first sequence and independent identically discretely distributed in the second one, whereas the third example concerns non-negative dependent observables. The MV-algebraic version of the Kolmogorov SLLN, presented in the earlier literature, cannot be applied in these cases.

We consider observables taking values in a probability MV-algebra of fuzzy events: we fix a measurable space, the full tribe of S-measurable functions with values in [0, 1] equipped with the standard operations, and a faithful state m on it; for each n, the joint observable is constructed as in Remark 2. In the third example, we consider a probability MV-algebra with the product, presented in the references, defined on a tribe over a probability space (see the references).

This paper is devoted to the development of MV-algebraic probability theory.
We formulate and prove three generalized versions of the strong law of large numbers. The first two versions of the strong law, i.e., the MV-algebraic Marcinkiewicz\u2013Zygmund SLLN and Brunk\u2013Prokhorov SLLN, describe the asymptotic behavior of the sums of independent observables, whereas the third one, i.e., the Korchevsky SLLN, concerns the case of dependent observables. Their proofs require an application of the Kolmogorov theory of probability and some measure-theoretic techniques. To illustrate our theoretical results, we also present and analyze some examples of sequences of observables in a probability MV-algebra. We believe that our results open new possibilities for further development of the MV-algebraic probability theory in the non-Kolmogorovian setting. In particular, they can be used for the future development of the theory of fuzzy, intuitionistic fuzzy, and interval-valued fuzzy random events in complex spaces. We would like to apply the proven theorems to the estimation of logical entropy, as well as other types of entropy in the case of intuitionistic fuzzy random events. This requires, among other things, the definition of entropy for observables in MV-algebras."} +{"text": "Scientific Reports 10.1038/s41598-020-57777-2, published online 22 January 2020Correction to: The original version of this Article contained an error in the title of the paper, where the word \u201cspectrophotometry\u201d was incorrectly given as \u201c|spectrophotometry\u201d. This has now been corrected in the PDF and HTML versions of the Article."} +{"text": "Nature Communications 10.1038/s41467-020-16154-3, published online 8 May 2020.Correction to: Nucleic Acids Res.44, 6994\u20137005 (2016), when describing construction of the blue-light activation tool (BLAT). The sentence \u201cThe construction of BLAT and BLRT follows a previously reported strategy38\u201d has been added to the beginning of the second paragraph of the \u201cResults\u201d subsection \u201cConstructing the basic tools for powering cell division\u201d. Reference 38 has also been added to the sentence following: \u201cTo construct a BLAT, two units were designed and assembled : a blue optogenetics unit (BOU) to express light-sensitive protein EL222, and an activation reporter unit to replace the LuxR-binding region with an EL222-binding region by overlapping the -35 region (E. coli consensus -35 and -10 regions) of the LuxI promoter38.\u201dThe original version of this Article omitted a reference to previous work in reference 38, Jayaraman et al. Blue light-mediated transcriptional activation and repression of gene expression in bacteria. 38\u201d has been added to the legend of Fig. 4 to clarify this.Furthermore, Fig. 4, panels a and b, are adapted from reference 38 Figure 1, panels a and c, and this was not clear in the figure legend. The sentence \u201cPanels a and b are adapted from Jayaraman et al.These have been corrected in both the PDF and HTML versions of the Article."} +{"text": "This approach increases exponentially, in the number of variables, the search space of the state-of-the-art tDBN algorithm. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. 
We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well throughout different experiments.

Bayesian networks (BN) represent, in an efficient and accurate way, the joint probability of a set of random variables. Dynamic Bayesian networks (DBN) extend BNs to model stochastic processes over time. It was shown that learning m-th order Markov DBNs such that each transition network has inter- and intra-slice connections is NP-hard. More recently, a polynomial-time algorithm was proposed that learns both the inter- and intra-slice connections in a transition network.

The problem of learning a BN given data consists in finding the network that best fits the data. In a score-based approach, a scoring criterion is considered, which measures how well the network fits the data; the search is then performed over a space of network structures.

By looking into lower-bound complexity results for learning BNs, it is known that learning tree-like structures is polynomial, whereas learning unconstrained structures is NP-hard. In this paper, we propose a generalization of the tDBN algorithm, considering DBNs such that each transition network is consistent with the order induced by the BFS order of the optimal branching of the tDBN network, which we call bcDBN. Furthermore, we prove that the search space increases exponentially, in the number of attributes, compared with the tDBN algorithm, while the algorithm still runs in polynomial time.

We start by reviewing the basic concepts of Bayesian networks, dynamic Bayesian networks and their learning algorithms. Then, we present the proposed algorithm and the experimental results. The paper concludes with a brief discussion and directions for future work.

Let X denote a discrete random variable that takes values over a finite set, and let X = (X_1, ..., X_n) represent an n-dimensional random vector.

Definition 1 (Bayesian Network). An n-dimensional Bayesian Network (BN) is a triple B = (X, G, Θ), where X is an n-dimensional random vector, G is a directed acyclic graph whose nodes correspond to the components of X, and Θ encodes the conditional probability distribution of each variable given its set of parents in G. A BN B induces a unique joint probability distribution over X given by

P_B(X_1, ..., X_n) = prod_{i=1}^{n} P(X_i | Pa(X_i)),

where Pa(X_i) denotes the set of parents of X_i in G. Let D denote a dataset of size N, in which each instance assigns a value to every variable of X.

Observe that the factorization above allows the joint distribution to be represented compactly. Intuitively, the graph of a BN can be viewed as a network structure that provides the skeleton for representing the joint probability compactly in a factorized way, and making inferences in the probabilistic graphical model provides the mechanism for gluing all these components back together in a probabilistically coherent manner.

An example of a BN is depicted in the figure, with variables: M - Maintenance problems; S - Severe weather; F - Flight delay; O - Overnight accommodation; and C - Cash compensation. In this simple example, all variables are Bernoulli (ranging over T and F). Inside the callouts, only the CPTs for variables taking the value T are given.

Learning a Bayesian network is two-fold: parameter learning and structure learning. When learning the parameters, we assume the underlying graph G is given, and our goal is to estimate the set of parameters of the network G, given only the training data.
We assume data is complete, i.e., each instance is fully observed; there are no missing values nor hidden variables, and the training set consists of N i.i.d. instances. Using general results on the maximum likelihood estimate in a multinomial distribution, we get the following estimate for the parameters of a BN B:

θ_ijk = N_ijk / N_ij,

where N_ijk is the number of instances in which X_i takes its k-th value while its parent set takes its j-th configuration, and N_ij = sum_k N_ijk.

Two common criteria measure how well a BN B fits the data D: the log-likelihood (LL) and the Minimum Description Length (MDL). Informally, the LL is given by:

LL(B | D) = sum_i sum_j sum_k N_ijk log(N_ijk / N_ij).

This criterion favours complete network structures and does not generalize well, leading to overfitting of the model to the training data. The MDL criterion, proposed by Rissanen, imposes a complexity penalty:

MDL(B | D) = LL(B | D) - (log N / 2) |B|,

where |B| is the number of free parameters of the network. The penalty introduced by MDL creates a trade-off between fitness and model complexity, providing a model selection criterion robust to overfitting.

The structure learning reduces to an optimization problem: given a scoring function, a data set, a search space and a search procedure, find the network that maximizes this score. Denote the set of BNs with n random variables by B_n.

Definition 2 (Learning a Bayesian Network). Given a dataset D and a scoring function φ, the problem of learning a Bayesian network is to find a network in B_n that maximizes φ.

The space of all Bayesian networks with n nodes has a superexponential number of structures, and the general learning problem is hard; hence the search is restricted to tractable subclasses such as branchings, consistent k-graphs (CkG) and BFS-consistent k-graphs (BCkG). The CkG and BCkG graph classes are exponentially larger, in the number of variables, when compared with branchings.

Definition 3 (k-graph). A k-graph is a graph where each node has in-degree at most k.

Definition 4 (Consistent k-graph). Given a branching R over a set of nodes V, a graph G over V is a consistent k-graph (CkG) if it is a k-graph whose edges respect the partial order induced by R.

Definition 5 (BFS-consistent k-graph). Given a branching R over a set of nodes V, a graph G over V is a BFS-consistent k-graph (BCkG) if it is a k-graph and, with respect to the BFS order of R, there can only exist an edge from a node to a node appearing later in that order. We assume the BFS visits the nodes reachable in R level by level.

Observe that the order induced by the optimal branching might be partial, while its BFS order is always total (and refines it).

Furthermore, let T denote a number of sequential instants of time. The set of observations is represented by the values of n attributes, measured at time t and referring to individual h. Dynamic Bayesian networks (DBN) model the stochastic evolution of a set of random variables over time. In the setting of DBNs, the goal is to define a joint probability distribution over all possible trajectories, i.e., over the possible values of each attribute at each time t. If the probability distribution is constant over time, the process is called stationary. Observations are viewed as i.i.d. samples of a sequence of probability distributions.

Definition 6 (m-th Order Markov assumption). A stochastic process over X satisfies the m-th order Markov assumption if the distribution at time t depends only on the m preceding time slices; in this case, m is called the Markov lag of the process. If, in addition, all such conditional probabilities are invariant over time, the process is a stationary m-th order Markov process.

Definition 7 (First-order Markov DBN). A non-stationary first-order Markov DBN consists of: a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distributions between consecutive time slices. We denote by p the bound on the number of parents from the previous time slices.

Learning DBNs, considering no hidden variables or missing values, i.e., considering a fully observable process, reduces simply to applying the methods described for BNs to each transition of time.

Definition 8 (Tree-augmented DBN). A dynamic Bayesian network is called tree-augmented (tDBN) if, for each transition network, the intra-slice subgraph is a branching.

We introduce a polynomial-time algorithm for learning DBNs such that the intra-slice network has in-degree at most k and is consistent with the BFS order of the tDBN, while the inter-slice network has in-degree of at most p.
The main idea of this approach is to add dependencies that were lost due to the tree-augmented restriction of the tDBN and, furthermore, remove irrelevant ones that might be present because a connected graph was imposed. Moreover, we also consider the BFS order of the intra-slice network as a heuristic for a causality order between variables. We make this concept rigorous with the following definition.

Definition 9 (BFS-consistent k-graph DBN). A dynamic Bayesian network is called BFS-consistent k-graph (bcDBN) if, for each intra-slice network, the graph is a k-graph consistent with the BFS order of the optimal branching computed by tDBN, and each node has at most p parents from the previous time slices.

Before we present the learning algorithm, we need to introduce some notation, namely the concept of ancestors of a node.

Definition 10 (Ancestors of a node). The ancestors of a node are the nodes preceding it in the BFS order of the optimal branching.

We will now briefly describe the proposed algorithm for learning a transition network of an m-th order bcDBN. Let p be the bound on the number of parents from past time slices. For each node, the algorithm scores every admissible combination of at most p parents from the past time slices (within the window of the m previous slices) together with at most k intra-slice parents. A complete directed graph is built in which each edge is weighted with the best achievable family score; an optimal branching is extracted, its BFS order is computed, and the parent sets are then selected over the BFS-consistent search space. Note that no cycle can occur: (i) in the intra-slice network the graph is consistent with a total order, and (ii) in the inter-slice network there are only dependencies from previous time slices to the present one (and not the other way around).

The pseudo-code of the proposed algorithm is given in Algorithm 1. As parameters, the algorithm needs: a dataset D, a Markov lag m, a decomposable scoring function φ, a maximum number of inter-slice parents p and a maximum number of intra-slice parents k.

Algorithm 1: Learning an optimal m-th order Markov bcDBN
1: for each transition do
2:   Build a complete directed graph over the intra-slice nodes
3:   Weight all edges (Algorithm 2)
4:   Extract an optimal branching and compute its BFS order
5:   Select, for each node, the best BFS-consistent parent set (Algorithm 3)
6: end for

Algorithm 2: Compute all the edge weights
1: for all nodes X_i do
2:   Let best_i be the best score of X_i using only inter-slice parent sets of size at most p
3:   for all nodes X_j distinct from X_i do
4:     Let w_ij be the best score of X_i with intra-slice parent X_j and any inter-slice parent set of size at most p
5:   end for
6:   Let the weight of edge (j, i) be e_ij = w_ij - best_i
7: end for

Algorithm 3: Compute the set of parents of X_i
1: Let max be the current best score of X_i
2: for all intra-slice parent sets S consistent with the BFS order, with |S| <= k do
3:   for all inter-slice parent sets X_ps with |X_ps| <= p do
4:     if the score of X_i given S together with X_ps is greater than max then
5:       Let max be that score
6:       Let the parents of X_i be S together with X_ps
7:     end if
8:   end for
9: end for

Theorem 2. Algorithm 1 runs in time linear in N, T and m, polynomial in n and r, and exponential in p and k, given a decomposable scoring function φ, a Markov lag m, a set of n random variables ranging over at most r values, a bound k on the in-degree of each intra-slice transition network, a bound p on the in-degree of each inter-slice transition network, and a set of observations of N individuals over T time steps.

Proof. For each time transition, the number of candidate parent sets with at most p elements from the past time slices is polynomial in n and m and exponential in p, and the number of intra-slice parent sets is polynomial in n and exponential in k. Calculating the score of each parent set (Step 4 of Algorithm 3), considering that each variable takes at most r states, takes time linear in N. Multiplying these bounds over all nodes and transitions gives the claim. □

Theorem 3. The number of BFS-consistent k-graphs grows exponentially in the number of variables.

Proof. Consider, without loss of generality, a fixed BFS order and the i-th node of V: it can choose its at most k parents among the nodes that precede it in the order. When enough lower nodes exist, each node admits exponentially many parent sets; therefore, the number of BFS-consistent k-graphs is at least the product of these counts, which grows exponentially in the number of variables. □

Our implementation is publicly available (https://margaridanarsousa.github.io/learn_cDBN/).
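A minimal sketch of the exhaustive, score-based parent-set search at the core of Algorithms 2-3 is given below, with an MDL-style family score passed in by the caller. Variable names, the data layout and the helper score are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

def best_parent_set(node, bfs_order, past_vars, k, p, score):
    """Score all BFS-consistent parent sets of `node`: up to k intra-slice
    parents preceding `node` in `bfs_order`, plus up to p past parents."""
    earlier = bfs_order[:bfs_order.index(node)]
    best, best_parents = float("-inf"), ((), ())
    for k_ in range(k + 1):
        for intra in combinations(earlier, k_):
            for p_ in range(p + 1):
                for past in combinations(past_vars, p_):
                    s = score(node, intra, past)
                    if s > best:
                        best, best_parents = s, (intra, past)
    return best_parents, best

# Toy usage with a dummy score that penalizes large families
dummy = lambda node, intra, past: -(len(intra) + len(past))
print(best_parent_set("X2", ["X0", "X1", "X2"],
                      ["X0[t-1]", "X1[t-1]"], k=1, p=1, score=dummy))
```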
The experiments were run on an Intel® Core™ i5-3320M CPU @ 2.60GHz ×4 machine. We assess the merits of the proposed algorithm by comparing it with one state-of-the-art DBN learning algorithm, tDBN; both algorithms were run on the same data, for a given number of observations N. The parameters p and k were taken to be the maximum in-degree of the inter- and intra-slice network, respectively, of the transition network considered.

We analyze the performance of the proposed algorithm for synthetic data generated from stationary first-order Markov bcDBNs. Five bcDBN structures were determined, parameters were generated arbitrarily, and observations were sampled from the networks for a given number of observations N. The structures comprise: one intra-slice complete bcDBN network (a); one incomplete bcDBN network, such that each node satisfies p + k <= 4 (b); and two further incomplete intra-slice bcDBN networks. Overall, bcDBN+MDL achieved higher precision scores and similar recall, compared with tDBN+MDL.

For both algorithms, in general, the LL gives rise to better results when considering a complete network structure and a lower number of instances, whereas with an incomplete network structure and a higher number of instances, the MDL outperforms LL. The complexity-penalization term of MDL prevents the algorithms from choosing false-positive edges and gives rise to higher precision scores. The LL selects more complex structures, in which each node tends to receive the maximum allowed number of parents.

We stress that, in all settings considered, both algorithms improve their performance when increasing the number of observations N. In order to understand the number of instances N needed to fully recover the initial transition network, we designed two new experiments where five samples were generated from the first-order Markov transition networks depicted in the figures. As k increases, the number of observations necessary to totally recover the initial structure increases significantly. The numbers of observations needed for bcDBN+MDL to recover the aforementioned networks are 0±478.18 (a) and 290±1134.77 (b). When considering more complex BFS-consistent k-structures, the bcDBN algorithm achieved consistently and significantly higher scores.

The bcDBN learning algorithm has polynomial-time complexity with respect to the number of attributes and can be applied to stationary and non-stationary Markov processes. The proposed algorithm increases the search space exponentially, in the number of attributes, compared with the state-of-the-art tDBN algorithm. When considering more complex structures, the bcDBN is a good alternative to the tDBN. Although a higher number of observations is necessary to fully recover the transition network structure, bcDBN is able to recover a significantly larger number of dependencies and surpasses, in all experiments, the tDBN algorithm in terms of precision.

A possible line of future research is to consider hidden variables and incorporate a structural Expectation-Maximization procedure in order to generalize hidden Markov models. Another possible path to follow is to consider mixtures of bcDBNs, both for classification and clustering.
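The structure-recovery comparison above rests on precision and recall over the recovered transition-network edges. A minimal sketch of that computation follows; the edge sets are illustrative placeholders.

```python
def precision_recall_f1(true_edges, learned_edges):
    """Compare a learned edge set against the ground-truth edge set."""
    tp = len(true_edges & learned_edges)
    precision = tp / len(learned_edges) if learned_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

true_edges = {("X0[t-1]", "X0[t]"), ("X0[t]", "X1[t]"), ("X1[t-1]", "X2[t]")}
learned = {("X0[t-1]", "X0[t]"), ("X0[t]", "X1[t]"), ("X0[t]", "X2[t]")}
print(precision_recall_f1(true_edges, learned))
```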
We have also included three other entries in this list, namely Argentina , Estonia and Hungary on the basis of their h-index performance (h-index ranks of those can be examined in comparison to a group of 30 countries that produce >98% of the world\u2019s highly cited (top 1%) papers (h-index per million inhabitants) , climbing to 0.964 if Greece, Iran, Italy and Russia are excluded is missing , raising questions about barriers to entry, and despite a wealth of opportunities for international collaboration, that will need to be addressed in the future.To examine whether the use of the formance . Interesformance . The h-ied here) and two bitants) . These 4milarity . The ran missing . Dispari missing . It is wAs the field of bioinformatics has expanded across all of the life sciences , the prebtaa132_Supplementary_DataClick here for additional data file."} +{"text": "Moreover, we provide a polynomial-time algorithm to identify all u-fp orthology assignments in a BMG. Simulations show that at least not on any a priori knowledge about underlying gene or species trees.Genome-scale orthology assignments are usually based on reciprocal best matches. In the absence of horizontal gene transfer (HGT), every pair of orthologs forms a reciprocal best match. Incorrect orthology assignments therefore are always false positives in the reciprocal best match graph. We consider duplication/loss scenarios and characterize unambiguous false-positive ( Orthology is one of the key concepts in evolutionary biology: Two genes are orthologs if their last common ancestor was a speciation event Fitch . Distingevolutionary scenario comprising a gene tree T and a species tree S together with a reconciliation mapT to S. The map In this contribution, we consider exclusively duplication/loss scenarios, i.e., we explicitly exclude horizontal gene transfer. Characterizations of reconciliation maps are given e.g. in G\u00f3recki and Tiuryn inaccuracies in the assignment of best matches from sequence similarity data Stadler et\u00a0al. , and(ii)limits in the reconstruction of the \u201ctrue\u201d orthology relation from best match graphs Gei\u00df et\u00a0al. .We consider best matches as an evolutionary concept: A gene y in species s is a best match of a gene x from species s contains no gene x. That is, best matches capture the idea of phylogenetically most closely related genes. Maybe surprisingly, the combinatorial structure of best matches has become a focus only very recently Gei\u00df et\u00a0al. and in the BMG. Here, we aim to identify false-positive edges from the structure of the BMG itself.Of course, the \u00df et\u00a0al. showed t\u00df et\u00a0al. below. Oxy is a reciprocal best match. If there are no other descendants that harbor genes witnessing the duplication event, then the framework of best matches provides no information to recognize xy as a false-positive assignment.We first note that false positives cannot be avoided altogether, i.e., not all false positives can be identified from a BMG alone. The simplest example, Fig.\u00a0On the other hand, RBMGs and thus BMGs contain at least some information on false positives. Since the orthology relation forms a cograph but RBMGs are not cographs in general Gei\u00df et\u00a0al. , incorrez et\u00a0al. noted thz et\u00a0al. . The remz et\u00a0al. . Here, wz et\u00a0al. to a comGood quartets cannot be defined on RBMGs because information on non-reciprocal best matches is also needed explicitly. 
This suggests considering BMGs rather than RBMGs as the first step in graph-based orthology detection methods. In practice, best matches are approximated by sequence similarity and thus are subject to noise and biases (Stadler et al.); the empirical evidence nevertheless supports the approach. Our contributions are the following:

We provide a full characterization of unambiguous false-positive orthology assignments in BMGs.

We provide a polynomial-time algorithm to determine all unambiguous false-positive orthology assignments in BMGs.

Since the material is extensive and very technical, we subdivide our presentation into a main narrative part and a technical part.

We consider finite, directed graphs. A graph is undirected if its edge relation is symmetric, and the symmetric part of a directed graph is the subgraph formed by its bidirectional arcs; an induced subgraph is obtained by restricting to a subset of vertices. For vertex-colored graphs, we are given a set M of colors and a vertex-coloring; a vertex-coloring is called proper if adjacent vertices receive distinct colors. A path (of given length) in a directed graph G is a subgraph induced by a nonempty sequence of pairwise distinct vertices in which consecutive vertices are connected by an arc.

A tree, as used here, is undirected, planted and phylogenetic; that is, it satisfies (i) the root has a single child and (ii) every other inner vertex has at least two children. We write L(T) for the leaves of T and T(v) for the subtree of T rooted in v. The set of clusters of a tree T comprises the leaf sets of its subtrees (Semple and Steel).

Best matches of the same color generalize the clusters of orthologous groups (COGs) concept. A vertex-colored digraph is a best match graph (BMG) if there exists a leaf-colored tree that explains it; it is a reciprocal best match graph (RBMG) if it is the symmetric part of a BMG. In an arbitrary vertex-colored graph, x and y are reciprocal best matches if both directed best-match arcs are present. For the symmetric part of the BMG, see Geiß et al.

An evolutionary scenario extends this picture in such a way that L(T) only contains extant genes. Inner vertices in the gene tree T that designate speciations have their correspondence in inner vertices of the species tree. In contrast, gene duplications occur independently of speciations and thus belong to edges of the species tree. The embedding of T into S is formalized by the following definition.

Definition (Reconciliation Map). A reconciliation of a gene tree T with a species tree S is a map from the vertices of T to the vertices and edges of S satisfying:
(R0) Root Constraint: the root of T is mapped to the root edge of S.
(R1) Leaf Constraint: if x is a leaf of T, then its image is the species in which gene x resides.
(R2) Ancestor Preservation: the map preserves the ancestor order of T.
(R3) Speciation Constraints: suppose x is mapped to an inner vertex of S; then (i) this vertex is determined by the images of the children of x in T, and (ii) the images of any two distinct children of x lie in distinct subtrees of S.

Several alternative definitions of reconciliation maps for duplication/loss scenarios have been proposed in the literature, many of which have been shown to be equivalent. This type of reconciliation map has been established in Geiß et al. Moreover, every reconciliation of T with leaf set L(T) into S determines an event labeling of the inner vertices of T. The following result is a simple but useful consequence of combining the axioms of the reconciliation map with the event labeling.

Lemma (Geiß et al., Lemma 2). Let T be a gene tree reconciled with a species tree S and endowed with the corresponding event labeling; then the event labeling satisfies the constraints used below.

The extremal event labeling of T is the map that assigns a speciation to an inner vertex whenever this is consistent with some reconciliation, and a duplication otherwise.

Given a gene tree T and a reconciliation of T with a species tree S, a duplication vertex of T is an AD (apparent duplication) vertex if its two subtrees have at least one color in common. In contrast, it is a non-apparent duplication (NAD) vertex if the color sets of its subtrees are disjoint. This notion is useful for a variety of parsimony problems that usually aim to avoid or minimize the number of NAD vertices (Swenson et al.).

The orthology graph of an event-labeled tree has vertex set L(T) and an edge xy whenever the last common ancestor of x and y is a speciation; it is often referred to as the orthology relation. Orthology graphs coincide with a well-known graph class:

Theorem. G is an orthology graph for some event-labeled tree if and only if G is a cograph.

This holds for any given reconciliation map connecting a gene tree with a species tree S.
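A small sketch of the orthology-graph construction just defined: two genes are orthologs iff their last common ancestor in the event-labeled gene tree is a speciation. The toy tree encoding (parent pointers plus an event map) is an illustrative assumption.

```python
from itertools import combinations

# Toy event-labeled gene tree: v1 is a duplication, v2 a speciation
parent = {"x": "v1", "y": "v1", "z": "v2", "v1": "v2", "v2": None}
event = {"v1": "duplication", "v2": "speciation"}
leaves = ["x", "y", "z"]

def ancestors(node):
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def lca(a, b):
    anc = set(ancestors(a))
    for node in ancestors(b):
        if node in anc:
            return node

orthology_edges = [(a, b) for a, b in combinations(leaves, 2)
                   if event[lca(a, b)] == "speciation"]
print(orthology_edges)  # x and y are paralogs; both are orthologs of z
```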
Definition. An edge xy of the orthology candidate set is false-positive, or fp for short, if for every reconciliation map with any species tree S the genes x and y are not orthologous; that is, xy is called fp whenever x and y cannot be orthologous w.r.t. any possible reconciliation. Such fp edges can be identified without considering reconciliation maps explicitly: whether an edge xy is fp depends only on the event-labeled gene tree, in particular on the two children of the last common ancestor of x and y and on the extremal labeling. Whether an edge is fp can be verified in polynomial time for any given gene tree, and the fp edges coincide with the edges characterized in the corresponding lemma.

As shown in the figure, edges such as xz and yz may be fp for one explaining tree but not for another. Since we assume that no information on the true gene tree is available a priori, it is natural to consider the set of edges that are false positives for all trees explaining a given BMG.

Definition. An edge xy in a BMG is unambiguous false-positive (u-fp) if, for all trees explaining the BMG, the edge xy is fp.

Hence, if an edge xy is u-fp, then it is in particular fp in the true history that explains the BMG; u-fp edges are always correctly identified as false positives. Not all "correct" false-positive edges are u-fp, however. It is possible that, for an edge xy in the BMG, xy is not fp for some other gene tree explaining the BMG, and therefore not u-fp.

In order to adapt the concept of AD vertices for our purposes, we introduce the color-intersection of an edge xy: the set of colors common to the subtrees rooted at the children of the last common ancestor of x and y.

Proposition. Every edge xy in a BMG with non-empty color-intersection is fp for every tree explaining the BMG, and hence u-fp.

As we shall see below, the converse of this proposition does not hold in general.

Definition (Quartets). Let the induced subgraph Q on four suitably colored vertices contain the edges ab, bc, and cd. Then Q is a good quartet if (i) the non-symmetric arcs follow the first pattern, a bad quartet if (i) they follow the second pattern, and an ugly quartet if they follow the third pattern. The edge xy in a good quartet is called its middle edge; the edge zx of an ugly quartet is called its first edge. First edges in ugly quartets are uniquely determined due to the colors; in bad quartets, this is not the case. The colors r, s, and t involved must be pairwise distinct. Note that (R)BMGs may also contain induced four-vertex subgraphs of other types. The three different types of quartets are shown in the figure.

Good quartets are characteristic of complementary gene loss. We collect here the main results of this section:

(i) If the BMG contains a good quartet, its middle edge yz is u-fp.
(ii) If the BMG contains a bad quartet, the edge xy and the middle edge yz are u-fp.
(iii) If the BMG contains an ugly quartet, the edges xy and zw involved are u-fp.

Not surprisingly, quartets are intimately linked to color-intersections:

Lemma. Let xy be an edge with non-empty color-intersection. Then xy is either the middle edge of some good quartet or the first edge of some ugly quartet, which in turn implies that xy is u-fp.

All u-fp edges xy with non-empty color-intersection are therefore covered: xy must be u-fp since it is the middle edge of a good quartet or the first edge of an ugly quartet. Moreover, an edge xy that is the middle edge of a good quartet can be u-fp even when the color-intersection criterion does not apply; hence quartets identify u-fp edges that are not identified with the help of the color-intersection alone.
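Since orthology graphs are cographs, i.e., free of induced paths on four vertices, any induced P4 in the symmetric part of a BMG witnesses at least one wrong edge. The brute-force sketch below extracts the symmetric part of a directed best-match relation and lists induced P4s; it is illustrative and not the paper's polynomial-time u-fp algorithm.

```python
from itertools import combinations, permutations

def symmetric_part(arcs):
    """Keep only reciprocal arcs of the directed best-match relation."""
    return {(u, v) for (u, v) in arcs if (v, u) in arcs}

def induced_p4s(vertices, edges):
    """All induced paths on four vertices (P4s) in an undirected graph."""
    adj = lambda u, v: (u, v) in edges
    found = []
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            if a > d:  # skip the reversed copy of each path
                continue
            path = adj(a, b) and adj(b, c) and adj(c, d)
            chords = adj(a, c) or adj(b, d) or adj(a, d)
            if path and not chords:
                found.append((a, b, c, d))
    return found

arcs = {("x", "y"), ("y", "x"), ("y", "z"), ("z", "y"),
        ("z", "w"), ("w", "z"), ("x", "w")}   # x -> w is not reciprocated
sym = symmetric_part(arcs)
print(induced_p4s({"x", "y", "z", "w"}, sym))
```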
The edge xy in the BMG shown in the figure is u-fp, but it is not contained in a good, bad, or ugly quartet. In order to characterize the u-fp edges that are not identified by quartets, we first introduce an additional motif that may occur in vertex-colored graphs.

Definition (Hourglass). An hourglass in a proper vertex-colored graph is an induced subgraph on two pairs of vertices, joined by the edge xy and a second edge together with crossing non-symmetric arcs. Note that Condition (i) rules out arcs between the vertices of the same color. Every hourglass is a BMG, since it can be explained by a tree as shown in the figure. Hourglasses are not necessarily part of an induced quartet.

Proposition. If a BMG contains an hourglass with edge xy as specified, then xy is u-fp. Hourglasses therefore identify u-fp edges that are not contained in a quartet (see the figure).

Definition (Hourglass chain). An hourglass chain is a sequence of overlapping hourglasses. A vertex z is called a left tail (resp. right tail) of the hourglass chain under the stated conditions, and a chain is tailed if it has a left or right tail. There is a vertex u which is common to all hourglasses in the chain.

For a leaf-colored tree, we define the augmented tree by inserting additional inner vertices reflecting the hourglass-chain structure. In particular, the augmented tree preserves the best match relation: for every leaf-colored tree, the tree and its augmented version explain the same BMG.

We now have everything in place to present the main results of this section: the u-fp edges are completely characterized in terms of quartets and hourglass chains, and they can be computed in polynomial time.

On the simulation side, species trees S were generated using a variant of the well-known constant-rate birth-death process with a given age. A planted species tree was used, which may contain bona fide multifurcations of species (Kliman et al.). Multifurcating gene trees were produced for varying duplication probabilities p. Since the true gene tree T and the true BMG G are known, we can also determine the set of u-fp edges as well as the subsets of u-fp edges that are middle edges of a good quartet or first edges of an ugly quartet, respectively. Note that, in general, these subsets do not coincide with all u-fp edges, and the fraction of u-fp edges in good and ugly quartets increases moderately for larger p.

The simulated data set of evolutionary scenarios comprises species trees with 10 to 30 species (drawn uniformly). The time difference between the planted root and the leaves was fixed. First, we note that, consistent with Geiß et al. and Stadler et al., the fraction of u-fp edges is moderate. The fraction of u-fp edges that appear only as first edges of bad quartets is even smaller; only 2-3% of the u-fp edges are associated with hourglass chains, i.e., less than 0.15% of all u-fp edges are of this type. The overwhelming majority of u-fp edges associated with quartets thus appear as middle edges of good quartets. This observation provides an explanation for the excellent performance of removing middle edges of good quartets, by which the u-fp edges are almost completely covered.

We have shown here how all unambiguously false-positive orthology assignments can be identified in polynomial time, provided that all best matches are known. In particular, we have provided several characterizations of u-fp edges in terms of underlying subgraphs and refinements of trees. Since the best match graph contains only false positives, we have obtained a characterization of all unambiguously incorrect orthology assignments. Simulations showed that the majority of false positives comprises middle edges of good quartets, while u-fp edges that appear only as first edges of an ugly quartet are rare. Not surprisingly, the hourglass-related u-fp edges become important in gene trees with many multifurcations; they do not appear in scenarios derived from binary gene trees. For the theory developed here, it makes no difference whether polytomies in the gene tree appear as genuine features, or whether the limited accuracy of the approximation from underlying sequence data produced the equivalent of a soft polytomy in the BMG. The augmented tree construction may also be of independent interest (cf. Lafond et al.).
Beyond the relationship of gene tree T and species tree S, we ask here about the information contained in the BMG itself: a BMG implies a set of constraints on the species tree S (Hernandez-Rosales et al.). In practice, one first converts sequence-similarity hits into evolutionary best matches in a systematic manner (Stadler et al.) and then estimates clusters of orthologous groups (COGs) in an empirically estimated RBMG (Tatusov et al.). Removing all u-fp edges yields the closest orthology graph obtainable from the BMG. However, orthology prediction tools intended for large data sets often do not attempt to infer the orthology graph, but instead are content with summarizing the information as clusters of orthologous genes (see, e.g., Roth et al.).

Simultaneous search for one of two targets is slower and less accurate than search for a single target. Within the Signal Detection Theoretic (SDT) framework, this can be attributed to the division of resources during the comparison of visual input against independently cued targets. The current study used one or two cues to elicit single- and dual-target searches for orientation targets among similar and dissimilar distractors. The online version of this article (10.3758/s13414-019-01854-w) contains supplementary material, which is available to authorized users.

Finding a target object in a scene requires observers to compare visual input at different locations with an internal representation of the target's features. This ability is thought to rely on components of selective attention that integrate information across frontal, parietal and visual cortical regions (Ptak). Most studies of search have focussed on the accuracy and speed of detection for a single target. In this situation, selection elicits a topographic map that represents objects in terms of their similarity to the target's features. Evidence to suggest that independently cued features can also mediate perceptual categorisation was reported by Roper and Vecera, who used cues for two target values. These results suggest that the selection of visual input for independently cued targets generalises from electrophysiological to behavioural responses during search. Evidence that multiple-item templates (MITs) can also guide oculomotor behaviour has been reported by Beck and colleagues.

All observers reported normal or corrected-to-normal visual acuity. Recruitment, consent and all experimental procedures conformed to American Psychological Association (APA) ethics standards. We used a small-n design.

The experiment was run on an IBM PC with a 19-in. CRT View Sonic G90fB monitor. The display resolution was 1,240 × 768 pixels and the frame rate was 85 Hz. Stimulus presentation and data collection were controlled using custom-built software in MATLAB with Psychophysics toolbox extensions.

Displays contained red Landolt C-shapes and grey annuli that subtended 3.0° × 3.0°. Stimuli were presented at 12 equally spaced locations on the circumference of a virtual circle with a radius of 7.0°. C-shapes and annuli were presented on a uniform black background. Experimental blocks contained four repetitions of this structure. On each block of trials, two target and four distractor orientation values were assigned. Target orientations were sampled from 90° and 270° ± 15° to 30°. Distractor orientations for each observer differed from Target-Left or Target-Right by an angle of rotation that produced 80% accuracy on a pre-test (see below).
Target and distractor values were fixed within experimental blocks to produce symmetric target-distractor similarity distributions for two targets among numerically equivalent subsets of leftward and rightward Cs. ; trial type (target-present or -absent); target identity (Target-Left or -Right), and set size . At set size two, one leftward and one rightward distractor were randomly sampled from the four alternatives. At set size one, a single leftward or rightward distractor was sampled with equal probability. On target-present trials, displays contained one C-shape at the orientation assigned to Target-Left or Target-Right. Targets always replaced a similar distractor from the same (leftward or rightward) group and displays always contained one C-shape from a group that was cued. Search displays were presented for 94 ms and followed by blank screen, which remained visible until a response was recorded. Short-duration displays were used to equate processing time and prevent eye movements on single- and dual-target searches.Figure C-shape in the same displays as the experimental session but varied the orientations of the target and distractor across the range \u00b1 5\u00b0 to 30\u00b0 using a method of constant stimulus. Individual responses were fitted with a cumulative Gaussian function to estimate the angle of rotation required by each observer to distinguish targets from distractors on 80% of trials.Observers completed ten blocks of trials in a single experimental session. In each block, single- and dual-target cues were equally likely for leftward and rightward targets. A cued target appeared in the search display on 50% of trials, and the order of presentation for each target, search type and trial was randomly assigned for each block. Prior to experimental sessions, observers completed two pre-test blocks of 80 trials. These presented a single i to the visual response to each object in the display j. The result of each comparison describes the perceived similarity between the cue and the object, which is represented by a real number sij. To classify the display, observers evaluate whether the maximum similarity value is less than or equal to a response criterion. The cumulative probability distribution function P(sij \u2264R) ;of sij depends on three factors: (1) The type of comparison C, which can be a target(t) similar distractor(s) or dissimilar distractor(d), depending on the relationship between the cue and the object in the display. (2) The set size D, which can equal 1, 2 or 4. (3) The number T of templates used to guide search, which is determined by the number of cues. These dependencies are indicated by the notation P. As a simplifying assumption, all comparisons between cues and objects with opposite orientations are modelled by the same probability distribution. In particular, the target is considered a dissimilar distractor for the second cue on dual-target searches. In agreement with standard SDT and variance 1, i.e., P = \u03a6 ), where \u03a6 denotes the standard normal cumulative probability distribution . The statistical modelling of this process is based on signal detection theory (SDT) and the assumption of independent comparisons . 
Because of independence, the probability that none of the comparisons exceed the response criterion equals the product of the probabilities for each individual comparison being less than \u03bb:To distinguish between theoretical accounts of attentional control during search, we introduce additional assumptions to derive two types of SDT model. Multiple-item template (MIT) models simulate display classification based on the evaluation of similarity values for independently cued feature values during dual-target search. We assume that the observer reports target-absent if none of the comparisons exceed a response criterion D,T). Conditional probabilities for hits and false alarms are obtained as 1-P(target-absent), depending on whether the target is present or absent in the display.In Eq. \u03bb and 18 \u03bc parameters . However, the experimental design yields only 12 observations for each subject . To derive useful and testable models, we therefore impose restrictions on the model parameters: First, we note that model predictions do not change if the same constant value is added to all parameters appearing together in a product term = 0 . Target discriminability is modelled as the difference between the means of the distributions for the target and those for similar and dissimilar distractors. For similar distractors, d'S = \u03bc-\u03bc. For dissimilar distractors, d'D = \u03bc-\u03bc. We then distinguish between noise- and capacity-limited search. In the former, we assume that d\u2019 is independent of set size and search type, i.e., d\u2019 = d\u2019(C). As a shorthand, we set d\u2019(C=s) = d\u2019S and d\u2019(C=d) = d\u2019D. For capacity-limited models, we assume that d\u2019 scales with the inverse root of the total number of comparisons in each search: with d\u2019(C=t)=0, this can be written as d\u2019 = d\u2019S/\u221a(D*T) with d\u2019S =d\u2019. This parameterisation produces negative d\u2019 values for distractors that are differentiated from the target distribution on the basis of their dissimilarity from the cue, while scaling d\u2019 by 1/\u221a(D*T) conforms to the decline in discriminability predicted by the SDT sample-size model of search = \u03bb(D*T), as the number of comparisons is the product of set size and cued orientations. The various \u03bb values are denoted \u03bb1, \u03bb2, \u03bb4, \u03bb8. Equation Applying Eq. as there is one cue-target comparison (CL-TL), three comparisons with similar distractors (CL-DL & CR-DR * 2) and four comparisons with dissimilar distractors .\u03bb. To represent the qualitative distinction between cues during the comparison process, SIT derivations contain separate terms to compute similarity values for objects depending on which template is active during search. For example, the estimated hit rate for the noise-limited SIT model on a dual-target search with a set size of 4 is given byIn contrast to MIT models, single-item template (SIT) models simulate search guided by a single attentional template. On a dual-target search, target detection is based on pairwise comparisons between the active attentional template and the objects in the display. We assume that the observer reports target absent if these comparisons fall below a response criterion d\u2019 is independent of set size, i.e., d\u2019 = d\u2019(C). For the capacity-limited model, d\u2019 scales with the inverse root of the number of objects in the display, because the number of active attentional templates is fixed at 1. 
With d\u2019(C=t)=0, this can be written as d\u2019 = d\u2019S/\u221aD with d\u2019S =d\u2019. Similarly, we set \u03bb = \u03bb(D) to model decision-noise and changes in threshold as a function of the number of comparisons between one attentional template and objects in the display. The various \u03bb values are denoted \u03bb1, \u03bb2, \u03bb4.In this equation, the observer randomly selects one of the cues and performs a single-target search based on the selected (active) attentional template. If the selected template matches the target in the display, performance will be equivalent to a single-target search. If the selected template matches the target that does not appear, hits are highly unlikely, because the cue-target comparison will yield a low similarity value. Accuracy across dual-target searches is predicted to be the average of the two terms. As for MIT models, we also distinguish noise- and capacity-limited derivations of SIT search. For the former, we assume d\u2019 and three or four \u03bb values) producing 5 or 6 error degrees of freedom. Fitting was carried out using maximum likelihood estimation and goodness of fit was assessed using a parametric bootstrap technique for Pearson\u2019s \u03c72 statistic. To do this, SDT models were fitted to each observer\u2019s data to compute the observed \u03c72 statistic. Parameter estimates for the fitted model were then used to simulate observations over K repetitions of the experiment (we used K = 500). By re-fitting and computing the \u03c72 statistic for each repetition, the \u03c72 distribution that would derive under the fitted model was estimated. Comparing the observed \u03c72 statistic against this \u03c72 distribution yielded the probability that the fitted model generates data that are at least as extreme as those observed. The larger this probability, the \u2018more typical\u2019 the observed data are under the fitted model, and the stronger the evidence for its plausibility. As an additional, more intuitive measure of fit, we also calculated the mean absolute difference between corresponding observed and estimated probabilities. For brevity, we report mean group estimates in the text. Model estimates for individuals are reported in the Electronic Supplementary Material at a set size of one.t(10) = 1.31, p > 0.20, Cohen\u2019s d = 0.34), and the remaining analyses collapse responses across both targets during search. Table F = 144.80, p < 0.001 , \u03b7p2 = .94, set size, F = 138.80, p < 0.001, \u03b7p2 = .93, and a significant Search Type by Set Size interaction, F = 4.97, p = 0.018, \u03b7p2 = 0.32. Two-template SDT models predict a reciprocal relationship between search accuracy and the number of comparisons required to classify the display. Planned contrasts to compare single- and dual-target searches across equivalent set sizes revealed non-significant differences at two, t(10) = .14, p = 0.89, d = 0.04, and four comparisons, t(10) = .67, p = 0.52, Cohen\u2019s d = 0.21, respectively. These data provide initial evidence that accuracy is related to the number of comparisons required to classify the display rather than qualitatively different strategies of attentional control on single- and dual-target searches.Accuracy for leftward and rightward leaning targets did not differ significantly and response bias (\u03bb) across equivalent set sizes, as well as statistical comparisons for the four models. All reveal a monotonic increase in \u03bb as a function of set size. 
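To make the preceding parameterisation concrete, the sketch below evaluates hit and false-alarm rates for the MIT model at set size four with two cues, using the product rule for P(target-absent). The target mean serves as the zero reference, so distractor comparisons carry negative d' values; when the target is absent, its place is taken by a similar distractor. All parameter values are illustrative.

```python
import math

def Phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_report_absent(comparison_means, lam):
    # independence: P("absent") is the product over all comparisons
    p = 1.0
    for mu in comparison_means:
        p *= Phi(lam - mu)
    return p

def mit_dual_target_rates(dS, dD, lam, capacity_limited=True):
    # dual-target search, set size 4 (D*T = 8 comparisons):
    # target present -> 1 target, 3 similar, 4 dissimilar comparisons
    scale = 1.0 / math.sqrt(8) if capacity_limited else 1.0
    present = [0.0] + [-dS * scale] * 3 + [-dD * scale] * 4
    absent = [-dS * scale] * 4 + [-dD * scale] * 4
    hit = 1.0 - p_report_absent(present, lam)
    fa = 1.0 - p_report_absent(absent, lam)
    return hit, fa

print(mit_dual_target_rates(dS=2.0, dD=3.5, lam=1.5))
```

Setting capacity_limited=False reproduces the noise-limited variant, in which only the criterion is allowed to vary with the number of comparisons.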
Estimates of d\u2019D were smaller than those for d\u2019S for the both SIT models and the MIT, noise-limited model, reversing the expected pattern of discriminability for targets among similar and dissimilar distractors. Allowing d\u2019s to scale with set size produced less variable \u03bb estimates, and the expected relationship between d\u2019S, and d\u2019 D for the MIT model only. AIC values were also smaller for the MIT than SIT models, with the smallest value obtained under the MIT, capacity-limited model \u2013 indicating a better fit between the observed and predicted hits and false alarms when accuracy was inversely scaled by the product of set size and the number of cued orientations. This group-level advantage was replicated at the individual level, where AIC values were smallest and goodness-of-fit indices largest for the MIT, capacity-limited model for 10/11 observers. The mean absolute difference between observed and estimated data was also smaller for the MIT, capacity- (0.03) than the MIT, noise-limited (0.05), and SIT, capacity- (0.07) and noise-limited (0.06) models ). This interpretation is consistent with capacity-limited decision processes based on the concurrent evaluation of similarity distributions for independently cued targets.SDT models of single-target search have been successfully used to characterise the relationships between decision-noise, discriminability and set size for different stimuli . All reported normal or normal-to-corrected visual acuity.Twelve observers were recruited to the study: Two were male and age ranged from 18 to 52 years . The display resolution was 1,280 \u00d7 1,024 pixels and the frame rate 100 Hz. Stimulus presentation and data collection were controlled using custom-built software in MATLAB with Psychophysics toolbox extensions Landholt\u2019s Cs presented on a black background . Cs subtended 5.0\u00b0 \u00d7 5.0\u00b0 at equally spaced locations on the circumference of a virtual circle with a radius of 7.0\u00b0. In contrast to Experiment Displays contained four red and stimulus type . On each block, Target-Left and Target-Right orientations were sampled from 90\u00b0 and 270\u00b0 \u00b1 15\u00b0 to 30\u00b0, respectively. Distractor orientations were assigned as Target-Left and -Right \u00b1 25\u00b0. Target and distractor orientations were fixed and the order of presentation for each target and search type were randomly assigned within each block.C-shape at the orientation assigned to Target-Left or Target-Right and one annulus. On dual-target searches the cue contained two C-shapes at the orientations assigned to Target-Left and Target-Right. Single- and dual-target cues were presented for 500 ms and 1,000 ms, respectively, before being replaced by a fixation cross for 1,000 ms. Search displays followed and always contained four objects \u2013 a cued target and one similar and two dissimilar distractors \u2013 which were randomly assigned to the four possible locations. Observers were instructed to fixate the object that matched a cue, and responses were measured using a box criterion to record the first fixation falling within the 5.0\u00b0 \u00d7 5.0\u00b0 area surrounding each C-shape in the display. This method excludes saccades falling outside stimulus locations but produces results similar to the nearest endpoint criteria , observers were prompted to press a key to begin each block of trials.Observers completed a short practice followed by four blocks of 60 trials in a single experimental session. 
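Stepping back to the fit assessment described above, here is a sketch of the parametric bootstrap for Pearson's chi-square statistic. The routine fit_and_predict stands in for whatever procedure returns maximum-likelihood response probabilities for a data set; it is assumed rather than implemented here.

```python
import numpy as np

def pearson_chi2(obs_yes, p, n):
    # chi-square over both response categories in each condition
    exp_yes, exp_no = p * n, (1.0 - p) * n
    return float(np.sum((obs_yes - exp_yes) ** 2 / exp_yes
                        + (n - obs_yes - exp_no) ** 2 / exp_no))

def bootstrap_pvalue(obs_yes, n, fit_and_predict, K=500, seed=0):
    rng = np.random.default_rng(seed)
    p_hat = fit_and_predict(obs_yes, n)           # ML fit to observed data
    chi_obs = pearson_chi2(obs_yes, p_hat, n)
    chi_sim = np.empty(K)
    for k in range(K):
        sim = rng.binomial(n, p_hat)              # simulate one replicate
        chi_sim[k] = pearson_chi2(sim, fit_and_predict(sim, n), n)  # re-fit
    return float(np.mean(chi_sim >= chi_obs))
```

As in the text, the returned probability measures how typical the observed statistic is under the fitted model; larger values indicate a more plausible model.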
Before each block, a 5-point calibration sequence was performed to ensure fixations to each i = 1 to 2) with those in the display (j = 1 to 4) and fixates the comparison that produces the maximum similarity sij to select the most likely target. The probability of the cue-object comparison to yield maximum similarity is given byThe observer\u2019s task on each trial is to indicate which of four stimuli match the cue by fixating the target. Our statistical description of this task uses the SDT framework derived for Experiment p,D,T) denotes the probability density associated with P,D,T). The equation can be interpreted as follows: The product term in the integrand describes the probability that all comparisons different from have results less than y. Weighing this value with the probability density for obtaining response y in the comparison and integrating over all possible values of y yields the total probability for selecting comparison .In Eq. D fixed to 4 and setting \u03bc = 0 as in Experiment s) and dissimilar (d) distractors on single- (ST) and dual-target (DT) searches; d\u2019(sST), d\u2019(dST), d\u2019(sDT) and d\u2019(dDT). For each observer, there are four observed quantities, i.e., the probabilities for selecting the target and similar distractor on single- and dual-target searches, respectively (the probability for choosing the dissimilar distractor is fixed because of P(t)+P(s)+2P(d) = 1). Without any further restrictions on the d\u2019 parameters, the model is fully identified and would reproduce the observations perfectly. However, in analogy to Experiment d\u2019(sST) = d\u2019(sDT) = d\u2019S and d\u2019(dST) = d\u2019(dDT) = d\u2019D. We also compute estimates for a capacity-limited MIT model in which d\u2019 scales with the number of comparisons required to search for one or two cued-targets = d\u2019S, d\u2019(sDT) = d\u2019S/\u221a2 and d\u2019(dST) = d\u2019(dDT) = d\u2019D). As set size is fixed across search types in this experiment, scaling d\u2019s by \u221a2 models a capacity limit that is specific to the MIT model when VSTM resources are distributed across independent attentional templates during search. All three models have two free parameters and 2 error degrees of freedom at the subject level.With i,j) but only the corresponding display object j. Therefore, the statistical model has to provide expressions for P(j) rather than P. This does not cause any complications for single-target searches, as there is only one cue i=1 and P(j) = P. For dual-target searches in the MIT model, saccades to object j could be based on the comparison for either cue, i.e., or so that the corresponding probabilities have to be added: P(j) = P+P. For the SIT model, the overall probability of fixating the target in a DT search is given byThe subject does not directly report the chosen comparison . For MIT models, the overall probability of selecting the target object in the noise-limited model is given byThis equation is interpreted as follows. On a dual-target search, the participant performs one target comparison, three similar-distractor comparisons and four dissimilar-distractor comparisons. The first integral describes the probability that the subject selects the comparison between the target cue and target object. The second pertains to choosing the comparison between the cued target that is not present and the target object, which is considered a dissimilar-distractor comparison . Models are again fitted using maximum-likelihood estimation. 
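The selection probabilities defined by the integral above can be computed numerically with a handful of quadrature nodes; the sketch below uses Gauss-Hermite quadrature, the same device the text turns to next. Function names and the example means are mine.

```python
import numpy as np
from math import erf, sqrt, pi

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_select(mu_sel, mu_rest, n_nodes=40):
    """P that the comparison with mean mu_sel yields the maximum similarity:
       integral of phi(y - mu_sel) * prod_k Phi(y - mu_rest[k]) dy."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    y = sqrt(2.0) * x + mu_sel                    # change of variables
    vals = [wk * np.prod([Phi(yk - m) for m in mu_rest])
            for yk, wk in zip(y, w)]
    return float(sum(vals) / sqrt(pi))

# sanity check: eight exchangeable comparisons -> probability 1/8 each
print(round(p_select(0.0, [0.0] * 7), 4))
# dual-target example: target vs. three similar and four dissimilar means
print(p_select(0.0, [-2.0] * 3 + [-3.5] * 4))
```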
For the computation of integrals, we make use of the fact that \u03c6 is a normal distribution and apply Gauss-Hermite quadrature . Accuracy for Target-Left and Target-Right were not significantly different, t(11) = 0.07, p > 0.95, Cohen\u2019s d = 0.019, and the following analyses collapse responses across both targets. Table t(11) = 16.61, p < 0.001, Cohen\u2019s d = 4.80, and increase in fixations to dissimilar distractors, t(11) = 12.32, p < 0.001, Cohen\u2019s d = 3.56, on dual- compared to single-target searches. The proportion of fixations to similar distractors for each type of search was not significantly different, t(11) = 0.52, p > 0.05, Cohen\u2019s d = 0.15.Trials in which fixations to a display object were recorded in less than 100 ms or longer than 3 standard deviations from the mean of single- and dual-target searches, respectively, were excluded from further analyses = 3.79, p = 0.038, \u03b7p2 = .26, and a significant Search by Object type interaction, F = 8.44, p = 0.002, \u03b7p2 = .43. The main effect of search did not reach statistical significance, F = 4.57, p > 0.05, \u03b7p2 = .29, Post hoc tests revealed median saccadic latencies to dissimilar distractors were significantly faster on single- than dual-target searches , while differences to targets and similar distractors were not statistically significant.Our focus in this study was to investigate changes in the accuracy of saccadic targeting during single- and dual-target searches. In Experiment Figure The results of Experiment d\u2019 is independent of the number of comparisons required to localise the target. Group and individual comparisons also reveal better fit indices for the MIT, capacity-limited than the SIT, noise-limited model, supporting saccadic targeting based on separate similarity distributions for each cue. As with Experiment Figure The current study was designed to investigate the nature of the dual-target cost on perceptual decisions and saccadic targeting during covert and overt search. To do this, we applied SIT and MIT SDT models to predict observers\u2019 accuracy when comparisons between visual input and VSTM are limited to (1) a single-item attentional template or (2) multiple-item templates for independently cued targets during dual-target search. In addition, we compared observers\u2019 accuracy to noise- and capacity-limited derivations for both models of attentional control. Each assumes search entails the comparison of noisy internal representations of visual input with one or two cued-orientations. Noise-limited derivations model the dual-target cost as an increase in stochastic noise when the number of comparisons doubles on dual- compared to single-target searches. Capacity-limited derivations apply an inverse square-root relationship to model an additional dependency between discriminability and the number of comparisons during search"} +{"text": "Namely, every clause must contain at least two satisfied literals. Because of its robustness, super solutions are concerned in combinatorial optimization problems and decision problems. In this paper, we investigate the existence conditions of the -super solution of Then the solution is an -super solution. For example, the solution In combinatorial optimization problems and decision problems, the robustness of solutions is a valuable property. A robust solution is not sensitive to small changes in dynamic and uncertain environments, and guarantees the existence of a small set of repairs when the future changes in a small way. R. 
Weigel and C. Bliek in introduca,b)-super solution of a CNF formula with special structure is undoubtedly of realistic significance. Therefore, we focus on some special k,s)-SAT has a SAT-UNSAT transition phenomenon and investigate the characteristics of the transition phenomenon.Encoding into a CNF formula is a common way to solve a practical problem. These CNF formulas often have some regular structures. A -super solution is a generalization of Supermodels in SAT problems. Some algorithms of finding super solutions were presented in [An -SAT where each clause has exactly k distinct literals. Zhang P in [k-SAT is in P for Specially, a -super solution is a satisfying assignment such that if any one variable is flipped to the opposite value, the new assignment is still a satisfying assignment. In other words, a -super solution of a CNF formula must satisfy at least two liters of every clause. The decision problem whether a CNF formulas has a -super solution is denoted as -SAT. -ang P in proved thou G in obtainedk-SAT is NP-complete. It implies that for k-SAT can not be solved in polynomial time. What happens if the number of occurrences of each variable is limited? So we expand -k-SAT to -d-regular d-regular k distinct literals per clause and at most s occurrences of each variable. A regular -CNF formula is a k-CNF formula, in which each variable occurs exactly in s clauses. A d-regular d. The NP-completeness of the special SAT problems worth further study.-3-SAT problem is in P. This shows that there is a polynomial time algorithm to decide whether a 3-CNF formula has a -super solution. For Kratochv\u00edl in pointed (i) for (ii) for s such that all shown in . In [12,shown in ,14,15, ik-SAT to -In this study, we give a polynomial time reduction that transforms (i) every (ii) -That is to say, for x or a negated propositional variable x is called a positive literal, and C is a disjunction of literals, F is a conjunction of clauses, F, and F. x in F.A literal is a propositional variable F, if F but their variable sets are disjoint. We divide variables into forced variables or unforced variables. A forced variable is a variable forced a same value by all satisfying assignments of a formula.If the formulas F is denoted by d-Hamming neighbourhood F is a satisfying assignment such that for any A satisfying assignment is also called a solution. 
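Given the characterization just stated (a (1,0)-super solution must satisfy at least two literals of every clause), checking a candidate assignment takes only a few lines. The clause encoding below is DIMACS-style and the example formula is invented.

```python
def is_super_solution_1_0(clauses, assignment):
    """clauses: iterable of clauses, each a list of non-zero ints
       (k = variable k, -k = its negation);
       assignment: dict mapping variable -> bool.
       True iff every clause has >= 2 satisfied literals, i.e. the
       assignment stays satisfying under any single variable flip."""
    return all(
        sum(assignment[abs(lit)] == (lit > 0) for lit in clause) >= 2
        for clause in clauses
    )

# toy 3-CNF example: each clause below has exactly two true literals
F = [[1, 2, -3], [-1, 2, 3], [1, -2, 3]]
tau = {1: True, 2: True, 3: True}
print(is_super_solution_1_0(F, tau))  # True
```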
The set of solutions of a CNF formula Definition\u00a01.A CNF formula is called a regular Definition\u00a02.For each Definition\u00a03.\u03a8 is a forced-A -CNF formula (i) there exist two variables \u03a8 has a -super solution and for any -super solution \u03c4 of \u03a8, it holds that (ii) Definition\u00a04.The projection Lemma\u00a01.An assignment is a -super solution of F if and only if it satisfies .An assLemma\u00a02.If .If k\u22653Lemma\u00a03.The critical function .The crLemma\u00a04.An instance of SAT is satisfiable if:.An ins(i) all of its clauses contain more than one variable, and(ii) each variable appears exactly once complemented and once uncomplemented.Lemma\u00a05.If the representation matrix of a formula F isthen the formula is satisfiable and every satisfying assignment forces all variables to a same value.ith variable occurs positively in the jth clause; if ith variable occurs negatively in the jth clause; if F expresses a cyclic of implicationHere, if an element A -super solution has certain robustness, but some CNF formula with special structures may not have a -super solution.Theorem\u00a01.Every regular -CNF Formula without pure literals does not have a -super solution.Proof.\u00a0F is a regular -CNF without pure literals, then each clause has exactly 3 distinct literals and each variable occurs exactly 2 times (one negative occurrence and another positive occurrence). Suppose, F has F. However, a -super solution must satisfy at least two literals of every clause. That is to say, a -super solution must satisfy at least F. Clearly, there is not a -super solution. Therefore, no regular -CNF Formula without pure literals has a -super solution.\u2003\u25a1If a CNF formula By bounding the difference between positive and negative occurrences of every variable, we find other CNF formulas without a -super solution.Corollary\u00a01.For Proof.\u00a0F is a 1-regular s times, and the absolute value of the difference between positive and negative occurrences of each variable is at most 1. Suppose F has F, but every -super solution must satisfy at least F. For Let F. Suppose F has F . Because every -super solution must satisfy at least F, only an assignment F. In other words, if F does not have a -super solution. That is, if the formula has a -super solution, the -super solution must be Now we consider an arbitrary 1-regular -CNF formula Moreover, we also find some Theorem\u00a02.Every regular Proof.\u00a0m clauses. By definition, each clause has exactly 4 distinct literals, and each variable occurs exactly in 2 clauses (one negative occurrence and another positive occurrence). We divide each clauses Suppose iable in . SupposeTheorem 2 can be considered as an extension of Hall\u2019s Marriage Theorem. Suppose we have a finite set of single men and women. 
If each man is attracted by two women and each woman is attracted by four man, then there must be a way that each woman could be married with two men at the same time.Using Lemma 4, it is easy to obtain the following corollaries by using the proof method of Theorem 2.Corollary\u00a02.For Corollary\u00a03.A CNF formula must have a -super solution if:(i) each clause has at least 4 distinct variables and(ii) each variable occurs exactly in two clauses(one negative occurrence and another positive occurrence).For the formula in Corollary 3, all clauses may not have the same number of variables.Theorem\u00a03.All -CNF formulas and -CNF formulas must have a -super solution.Proof.\u00a0Suppose 8)\u226429 in , all -CNF formulas must have a -super solution by 9)\u226451 in .\u2003\u25a1In , it has In this section we study in what conditions -Theorem\u00a04.For Proof.\u00a0We will present a polynomial time reduction method from -super solution of the formula f.Step 2 Let f with the variables f being renamed as Step 3 We construct the formula s clauses. So Obviously, every clause of It is supposed that It is supposed that For The critical function Corollary\u00a04.If Proof.\u00a0The statement follows directly from Lemma 2 and Theorem 4.\u2003\u25a1By Hall\u2019s Marriage Theorem, -CNF and -CNF are satisfiable. But --SAT and --SAT are NP-complete because -SAT and -SAT are NP-complete.Next, we make some modifications of the reduction method in the proof of Theorem 4. We introduce a sufficient number of new variables, add some new clauses to make up the gap of the occurrence number of every variable, and guarantee that every new clause have at least a positive occurrence of two new variables -super solution). Because Corollary\u00a05.If Here, -regular -super solution have a -super solution. First we flip a unsatisfied literal of any one clause with only one satisfied literal. If the formula still does not have a -super solution, then we flip a unsatisfied literal of other clause with only one satisfied literal, until the formula has a -super solution. When the formula has a -super solution, the flipped literal is a key-literal.It is supposed that a can make c and only two literals of the flipped clause. This also indicates that all -super solutions satisfy the literal y -super solution. There are two cases to consider.It is supposed that a -super solution. Lemma 6 entails that we can construct a forced-k-SAT to an instance of --(s)-SAT exists, then Z occurs in three clauses and every variable of X occurs in two clauses. Obviously, the formula Using Corollary 2, if n cyclic of implication. That is, if It is assumed that an assignment It is assumed that k-SAT is NP-complete, - every ((ii) --super satisfiability of (k. Besides, for larger values of k, the upper and lower bounds of Although we obtain some results about"} +{"text": "This is a comment on \u201cManifestations of Sasang Typology according to Common Chronic Diseases in Koreans\u201d recently published by Hong et al. that exaAs for the SPQ measuring biopsychological characteristics of Yin-Yang personality traits, the SPQ and its subscale scores of DM, hypertension, FD, and MDD were correctly illustrated, but their implications were not discussed in the context of recently published reports , 5. The The SPQ-E score of the FD group was significantly higher than that of the hypertension and adenomyosis groups, and the MDD group was similar to the FD group . 
A persoAs for the SDFI measuring good digestive function and appetite, the article of Hong et al. made crir\u2009=\u2009\u22120.585) and Functional Dyspepsia-Related Quality of Life (r\u2009=\u2009\u22120.433), while the SDFI-E has a positive correlation with Dutch Eating Behavior Questionnaire (r\u2009=\u20090.481) [r\u2009=\u20090.299) and SDFI-D (r\u2009=\u20090.310) scores [The operational definition of SDFI-D is a measure of good and hyperactivated digestive function, and a person with high SDFI-D should have a high BMI , 6\u20138. A r\u2009=\u2009\u22120.43, while In the article by Hong et al. , the BMIFrom this perspective, the high score of SDFI-D would be a typical clinical feature of metabolic disease showing DM, hypertension, and obesity (high BMI) distinguished from other chronic diseases. The FD patients with low SDFI-D and high SPQ-E scores might be recognized as bad or hypoactive digestive function along with psychopathological vulnerability in traditional Korean medicine.The SDFI-A score of MDD group was significantly lower than that of DM, hypertension, FD, and adenomyosis groups in Hong's study . ConsideAs a conclusion, with consideration of the latest clinical studies \u20137, the c"} +{"text": "The generator matrices of polar codes and Reed\u2013Muller codes are submatrices of the Kronecker product of a lower-triangular binary square matrix. For polar codes, the submatrix is generated by selecting rows according to their Bhattacharyya parameter, which is related to the error probability of sequential decoding. For Reed\u2013Muller codes, the submatrix is generated by selecting rows according to their Hamming weight. In this work, we investigate the properties of the index sets selecting those rows, in the limit as the blocklength tends to infinity. We compute the Lebesgue measure and the Hausdorff dimension of these sets. We furthermore show that these sets are finely structured and self-similar in a well-defined sense, i.e., they have properties that are common to fractals. The fractal dimension of In his book on fractal geometry, Falconer characterizes a set operties still eludes us. Nevertheless, our results may apply in areas beyond channel coding: Ar\u0131kan\u2019s polarization technique was used to polarize R\u00e9nyi information dimension and to cSince we consider the case Lemma\u00a01Let W20y2|u1:=12\u2211W20y2|u1: channelW21=2I(W) (ProposiLemma\u00a02 (Bounds on the Bhattacharyya Parameter).with equality in Proof.\u00a0W is a BEC [The equality and inequality in follow f\u22641, from the cardinality of the output alphabet increases exponentially in If we stop the polarization procedure at a finite blocklength ly in 2n (Chapterly in 2n (p. 36).ly in 2n .A channel degraded w.r.t. the channel W channels remains upgraded (degraded) during polarization:Lemma\u00a03Suppose that ( (Lemma 4Lemma\u00a04( (p. 9) &Proof.\u00a0By choosingls, take .Let Let W is nontrivial, i.e., that If Proposition\u00a02 (Denseness).Proof.\u00a0See f is not injective. It is not obvious, however, that the intersection exhausts the set on which f is non-injective. A consequence of this proposition is that there is no interval that contains only good channels. This has implications for code construction techniques. Indeed, the authors of [It is not really surprising that thors of ,19 suggeProposition\u00a03 (Symmetry).There exists a function \u03d1, defined for almost all values in Proof.\u00a0See alignment of the sets W and Proposition 3 has two implications. 
The first implication concerns the negative . Indeed,arameter .The second implication is that, at least for BECs, the sets recurring, i.e., there is a length-k sequence z close to zero and z close to one, the operation x, It is possible to define Example\u00a02.Let Now suppose that W is a BEC with Bhattacharyya parameter Proposition 4 (Lebesgue Measure & Hausdorff Dimension).Proof.\u00a0See W. The fact that W. A positive Lebesgue measure and a Hausdorff dimension equal to one are not indicators of fractality.Loosely speaking, the Lebesgue measure of quasi self-similar. Along the same lines, the quasi self-similarity of The last fractal property we consider is self-similarity. As Falconer notes (p. xxviProposition\u00a05 (Self-Similarity).Let quasi self-similar in the sense that, for all n and all k, If W is symmetric, Proof.\u00a0See In other words, at least for a symmetric channel, self see . The selExample\u00a03.We want to determine whether r, length-Hamming weight of i-th row of r, length-An order-dexed by (20)F={iTo analyze the effect of doubling the block length, note thati-th row be indexed by Assume that we indicate the rows of his with yieldswDefining n tends to infinity. An important ingredient in our proofs is the concept of normal numbers.In Definition\u00a03 .A number simply normal to base 2.Let fractional Hamming weight larger than a given threshold.Loosely speaking, the set of heavy codewords corresponds to those rows of Example\u00a05.Proposition\u00a06 (Denseness).For all n to be even and set n, and one needs to depart from intuition based on these finite-blocklength considerations.Similarly as for polar codes, also Reed\u2013Muller codes are such that no interval is contained in either Proof.\u00a0See Proposition 7 (Lebesgue Measure & Hausdorff Dimension).The Hausdorff dimension satisfieswhere Proof.\u00a0See W. In contrast, the set W. Rather, Proposition 7 suggests that the order parameter phase transition for the rate of Reed\u2013Muller codes: If Loosely speaking, the Lebesgue measure of Let us briefly consider the case The sets cale cf. . We nextal codes (ProposiProposition\u00a08 (Self-Similarity).Let quasi self-similar in the sense that, for all n and all k, Proof.\u00a0See The set has a fine structure, i.e., there is detail on arbitrarily small scales;It does not admit a description in traditional geometrical language, neither locally nor globally; it is irregular in some sense;It has some form of self-similarity, at least approximate or statistical;The fractal dimension of the set exceeds its topological dimension.That Kronecker product-based codes possess fractal properties has long been suspected. The present manuscript contains several results that back this suspicion with solid mathematical analyses. Specifically, we assumed that the blocklength tends to infinity and investigated the properties of the set operties (p. xxviIndeed, the sets R. In other words, while fractional order of the code, R and that, thus, its Hausdorff dimension equals one. An appropriate definition of R, length-One reviewer pointed out that our definition of q is prime. One can show that this matrix is polarizing as long as it is not upper-triangular [\u2113-ary expansion of real numbers in q-ary Reed\u2013Muller codes, e.g., [Another obvious extension of our work are non-binary polar and Reed\u2013Muller codes. 
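Returning to the BEC case analyzed above: a BEC with erasure probability epsilon has Bhattacharyya parameter Z = epsilon, and one polarization step maps Z to 2Z - Z^2 and Z^2. The sketch below iterates this recursion and estimates the measure of the set of good indices, which should approach the capacity 1 - epsilon as n grows; the numerical threshold and the bit-order convention are my own choices.

```python
def polarize_bec(eps, n):
    """Bhattacharyya parameters of the 2**n synthesized channels of a BEC."""
    zs = [eps]
    for _ in range(n):
        zs = [z_new for z in zs for z_new in (2 * z - z * z, z * z)]
    return zs

eps, n = 0.5, 16
zs = polarize_bec(eps, n)
good = sum(z < 1e-9 for z in zs) / len(zs)
bad = sum(z > 1 - 1e-9 for z in zs) / len(zs)
print(f"good fraction ~ {good:.3f} (capacity = {1 - eps}), bad ~ {bad:.3f}")
```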
For example, consider an iangular (Theorems, e.g., ,24."} +{"text": "Scientific Reports 10.1038/s41598-019-41202-4, published online 20 March 2019Correction to: The original PDF version of this Article contained a typographical error in the publication date \u201820th March 2019\u2019 which was incorrectly given as \u201818th March 2019\u2019. This has now been corrected in the PDF version of the Article. The HTML version was correct at the time of publication."} +{"text": "Nature Communications 10.1038/s41467-020-17611-9, published online 30 July 2020.Correction to: The original version of this Article contained an error in the Abstract, in which the word \u2018erosive\u2019 was misspelled \u2018eros0ive\u2019. This has now been corrected in the PDF and HTML versions of the Article."} +{"text": "R \u22481.In a comment on our recent paper , the comR \u2248 1. We do not do that. Throughout the paper and its extensive supporting information, we emphasize that the derivation of the estimates for the critical connectivity R \u2248 1, or, equivalently, D \u2248 Dc.We are surprised to see that the commentators in ref. R \u2248 1 or D \u2248 Dc are not sharp, and, as we state explicitly in ref. R \u2248 1. Within the context of the entire paper (1) and its supporting information, this is completely clear. It seems that the commentators in ref. The critical values A in ref. A in ref. In addition, the analysis of close-to-critical spreading on Erd\u0151s-R\u00e9nyi (ER) networks in ref. It is fortunate, however, that this comment gives us the opportunity to cite a work that noted the possibility of linear spreading of diseases on SW networks before (figure 5 in ref."} +{"text": "Here, the zero-temperature phase behavior of bosonic particles living on the nodes of a regular spherical mesh (\u201cPlatonic mesh\u201d) and interacting through an extended Bose-Hubbard Hamiltonian has been studied. Only the hard-core version of the model for two instances of Platonic mesh is considered here. Using the mean-field decoupling approximation, it is shown that the system may exist in various ground states, which can be regarded as analogs of gas, solid, supersolid, and superfluid. For one mesh, by comparing the theoretical results with the outcome of numerical diagonalization, I manage to uncover the signatures of diagonal and off-diagonal spatial orders in a finite quantum system. Gases of ultracold bosonic atoms loaded in an optical lattice provide the unique opportunity to study quantum many-body effects under controlled conditions ,2. To a In a system at zero temperature . Experig., Ref. or dysprg., Ref. ) and molt, in absolute terms) to the on-site interaction (U) is reduced. The overall number density of particles is controlled via a chemical-potential parameter t and U, the lattice becomes increasingly filled with particles, but this can only occur outside the Mott regions since the insulator phase is incompressible [V is introduced between nearest-neighbor atoms (\u201cextended BH model\u201d), new phases may arise, in primis a supersolid phase [The usual BH model predicts a ressible . The BH ressible ,10,11, pressible ,14,15,16ressible ,18, to nid phase ,25,26,27\u2192\u221e limit ,30,31,32I hereafter present the results of yet another investigation of the extended BH model, now choosing a finite graph as hosting space for bosons. 
Even though clearcut phase transitions cannot occur in a few-particle system, a convenient choice of boundary conditions may alleviate the difference with an infinite system, making the study of a finite quantum system valuable anyway. A practical solution is to use spherical boundary conditions (SBCs), which have often been exploited in the past to discourage long-range triangular ordering at high density ,36,37,38The plan of this paper is the following. After introducing the models in i can only be 0 or 1 . In theThe rationale behind the choice of a cubic mesh is now clear: by introducing a repulsion between occupied NN sites, we promote the occurrence of a Platonic \u201ccrystal\u201d, i.e., the regular tetrahedron , the total energy (E), and the order parameters for tetrahedral are considered below.H describes a system of bosons on a spherical mesh of M sites, U term in a density wave at low t, as well as a supersolid phase at the boundary between the insulator and superfluid phases. These features are also present in the hard-core limit, as reported, e.g., in Refs. [In the BH model on a standard lattice, at in Refs. ,32.MF theory is the method of choice when a new many-body problem is attacked; it has been frequently applied for continuous quantum systems as well, as an effective means to identify the ground states and quantum transitions between them . i, respectively. For hard-core bosons, we readily obtain i-th site. For a bipartite mesh, sites are either of type A or B, hence the unknown parameters are four, namely In the decoupling approximation, the two-site terms in the Hamiltonian are lineHMF withHMF=\u2212t\u2211iFA has three neighbors, all belonging to grid B, and vice versa. Hence,A (B) for operators relative to a single A (B) site, the MF Hamiltonian reads:B \u201ccrystal\u201d), A \u201ccrystal\u201d), and For the QCT model, the mesh is bipartite and formed by two tetrahedral sub-meshes with four points each. A point of grid an reads:HMF=E0\u221212H for A and B; hence, our search can be restricted to homogeneous (superfluid) solutions: E isSuperfluid and supersolid \u201cphases\u201d have non-zero, possibly distinct, complex values of rrive at(\u03bb\u2212b)2(\u03bb\u2212root of (E=b\u2212(a\u2212b)oot of (1E=b\u2212(a\u2212b) to give\u03c1=\u03bc+3t3V+By comparing the grand potentials of all the \u201cphases\u201d, we arrive at the ground-state diagram in A, 8 sites) and a co-cube . We have:The decoupling approximation for the QDC model works similarly as for the QCT model. Again, the starting point is Equation and the en reads:HMF=E0\u221224Moving to phases with o be see :(31)E=12Equation ).E satisfy the following linear system: Using he state describia priori assuming them to be equal in pairs and diagonalizing the ensuing matrix. An extensive mapping of a few quantum averages enables us to clarify the nature of the \u201cphases\u201d present.Let us again reconsider the QCT model at A or B, they belong to. Then, I compute the average occupancies of A and B sites . In A and a B site, with the only exception of A-B inversion are the same. I show in N operator, implying that the ground state (which is non-degenerate for N. Like in MF theory, the occupancy is zero below t, with no abrupt transition from \u201ccrystal\u201d to superfluid values.As far as the occupancies are concerned, exact diagonalization shows that they are always equal for an t and t lines; each jump discontinuity of Another difference with MF theory concerns the ground-state averages of Equation . 
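The exact-diagonalization computation referenced here is small enough to reproduce directly: the cubic mesh has 8 hard-core sites, so the Hilbert space has dimension 256. The sketch below assembles the grand-canonical Hamiltonian with hopping t, nearest-neighbour repulsion V, and chemical potential mu on the cube graph and reports the sublattice occupancies; the parameter values are illustrative.

```python
import numpy as np
from itertools import product

# cube graph: vertices are 3-bit tuples, edges join tuples differing in one bit
verts = list(product((0, 1), repeat=3))
index = {v: i for i, v in enumerate(verts)}
edges = [(index[u], index[v]) for u in verts for v in verts
         if index[u] < index[v] and sum(a != b for a, b in zip(u, v)) == 1]

def hamiltonian(t, V, mu, n=8):
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for s in range(dim):
        occ = [(s >> i) & 1 for i in range(n)]
        H[s, s] = V * sum(occ[i] * occ[j] for i, j in edges) - mu * sum(occ)
        for i, j in edges:
            for a, b in ((i, j), (j, i)):      # hard-core hop b -> a
                if occ[b] == 1 and occ[a] == 0:
                    H[s ^ (1 << a) ^ (1 << b), s] += -t
    return H

H = hamiltonian(t=0.2, V=1.0, mu=1.5)
energies, states = np.linalg.eigh(H)
gs = states[:, 0]
occ_mean = [sum(((s >> i) & 1) * gs[s] ** 2 for s in range(256))
            for i in range(8)]
# A/B sublattices of the cube = vertices with even/odd coordinate parity
nA = sum(occ_mean[index[v]] for v in verts if sum(v) % 2 == 0) / 4
nB = sum(occ_mean[index[v]] for v in verts if sum(v) % 2 == 1) / 4
print(f"E0 = {energies[0]:.4f},  <n_A> = {nA:.3f},  <n_B> = {nB:.3f}")
```

Consistent with the observation reported above, the exact ground state of the finite system yields equal A and B occupancies, since it is a symmetric superposition of the two tetrahedral crystals.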
In the t values in the range 0 to 0.5. In the MF curves, cusp singularities are associated with the crossing of first-order transition lines. We see that MF theory systematically overestimates exact values, the more so the larger t is.Finally, in I have worked out the zero-temperature phase diagram of two systems of hard-core bosons defined on the nodes of a regular spherical mesh. The interaction is of Bose\u2013Hubbard type, with a further repulsion between neighboring particles. Choosing a suitable mesh, bosons are pushed to form a Platonic \u201ccrystal\u201d in a range of chemical potentials. In the QCT model, the mesh is cubic and a tetrahedral \u201ccrystal\u201d is formed; in the QDC model, the mesh is dodecahedral and a cubic \u201ccrystal\u201d is formed instead.Using a mean-field approximation, I have obtained fully analytic results for the thermodynamic properties of the two models. In addition to a number of insulating phases, both systems also exhibit a superfluid ground state. In the QDC model, triple coexistence between two solids and the superfluid is superseded by the occurrence of a more stable supersolid phase. Clearly, while the predictions of mean-field theory are generally accurate for an infinite Bose\u2013Hubbard system, deviations will unavoidably be observed in a small system, where no true singularity can occur. However, discrepancies are probably less strong for bosons on a regular spherical mesh, which has equivalent sites and is devoid of natural boundaries.To check this expectation for the QCT model, I have diagonalized its Hamiltonian exactly, finding the ground state and a number of ground-state averages. Overall, the exact"} +{"text": "Scientific Reports, 10.1038/s41598-020-71679-3, published online 09 September 2020Correction to: The original version of this Article contained an error in the title of the paper, where the word \u201cHumid\u201d was incorrectly given as \u201cHuman\u201d. This has now been corrected in the PDF and HTML versions of the Article, and in the accompanying Supplementary Information file."} +{"text": "We read with interest the paper by Jun S.Y. et al. dealing As survival rate is similarly low in patients with Crohn\u2019s disease-associated SBA and those with sporadic SBA ,5,6, it"} +{"text": "This paper studies index coding with two senders. In this setup, source messages are distributed among the senders possibly with common messages. In addition, there are multiple receivers, with each receiver having some messages a priori, known as side-information, and requesting one unique message such that each message is requested by only one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate normalized codelength, which is expressed as the optimal broadcast rate. In this work, firstly, for a given TSUIC problem, we form three independent sub-problems each consisting of the only subset of the messages, based on whether the messages are available only in one of the senders or in both senders. Then, we express the optimal broadcast rate of the TSUIC problem as a function of the optimal broadcast rates of those independent sub-problems. In this way, we discover the structural characteristics of TSUIC. For the proofs of our results, we utilize confusion graphs and coding techniques used in single-sender index coding. 
To adapt the confusion graph technique in TSUIC, we introduce a new graph-coloring approach that is different from the normal graph coloring, which we call two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. We further determine a class of TSUIC instances where a certain type of side-information can be removed without affecting their optimal broadcast rates. Finally, we generalize the results of a class of TSUIC problems to multiple senders. In this scenario, if the sender is informed about the side-information available at all receivers, then it can leverage that information whilst encoding to reduce the required number of broadcast transmissions, in comparison with a naive approach of transmitting all requested messages uncoded and separately. Such an encoding process is called index coding, and the resulting sequence of coded messages is known as an index code. Moreover, each receiver upon receiving the index code will be able to decode its required message by utilizing its side-information. The main aim of index coding is to find the optimal (shortest) codelength and the corresponding coding scheme. Index coding was introduced by Birk and Kol [Consider a communication scenario over a noiseless channel where a sender is required to broadcast messages to multiple receivers, each caching some messages requested by other receivers a priori. The messages cached at each receiver is known as its and Kol ,2, and f and Kol ,11,12,13Macro-cell networks with caching helpers \u2014cellularcooperative data exchange \u2014peer-to-distributed storage\u2014storage networks where data are distributed over multiple storage devices/locations.Most existing works on index coding deal only with a single sender, capturing scenarios with centralized transmissions. However, many communication scenarios such as the following have messages distributed among multiple senders:In addition, each sender can be constrained to know only a subset of the total messages due to reasons such as limited storage, or error whilst receiving some messages over noisy channels, or server failure to deliver all messages. In this case, distributed transmissions are required, where multiple senders broadcast messages to the receivers. One metric to maximize the transmission efficiency in this scenario is to minimize the aggregate number of transmissions from all senders in such a way that all receivers\u2019 demands can be fulfilled. As this problem is more general than an index-coding problem with a single sender and is of practical interest , it is a useful research avenue to study index-coding problems with multiple senders, known as multi-sender index-coding problems.broadcast-rate formulation of the problems. In their work, they devised lower and upper bounds on the optimal broadcast rate by implementing a graph-theoretic approach. The results were established using information-flow graphs, which represent receivers\u2019 request, and message graphs, which represent senders\u2019 message setting. Furthermore, they showed problem instances for which the upper and lower bounds coincide. A class of such instances is where no two senders have messages in common.The multi-sender index-coding problem was first studied by Ong et al. . They counicast message setting, meaning each message is requested by only one receiver, each receiver requests only one message, and each receiver knows a subset of messages requested by other receivers a priori. 
Based on graph-theoretic approaches, they established upper bounds on the optimal broadcast rate. In particular, they focused on the two-sender case, called two-sender unicast index coding (TSUIC). They extended existing single-sender index-coding schemes, namely the cycle-cover scheme [In another work, Thapa et al. considerr scheme ,19, the r scheme and the r scheme to the cSadeghi et al. considerdecentralized data shuffling problems in which the receivers/workers can communicate with one another via a shared link. The decentralized data shuffling phase with uncoded storage (which stores a subset of bits of the data set) is equivalent to a multi-sender index coding problem. For this problem, they proposed converse and achievable bounds that are to within a factor of 3/2 of one another. Moreover, the proposed schemes were shown to be optimal for some classes of the problem. Recently, Porter et al. [embedded index coding (EIC), in which each node acts as both sender and receiver. With the help of several results, they showed the relationship between single-sender index coding and EICs. Furthermore, they developed heuristics to solve EIC problems efficiently.In a recent work by Li et al. , a new rr et al. introducconfusion graphs in index coding [Different approaches have been attempted to solve the multi-sender index-coding problems. However, the problems are more difficult and computationally complex than their single-sender counterparts, and we know very little about the characteristics of the problems. This paper studies the broadcast-rate formulation of TSUIC problems by implementing a graph-theoretic approach. More precisely, in the same spirit of studying structural properties of index-coding capacity in the single-sender case by Arbabjolfaei et al. , we examx coding along wiProposing a new coloring concept for confusion graphs in TSUIC, called two-sender graph coloring . However, for TSUIC, as the two senders (encoders) contain some messages in common, the standard method of graph coloring of the confusion graph may not lead us to an index code. In this regard, we need a different kind of coloring function in TSUIC, and thus, in this paper, we propose a novel coloring technique to color the confusion graphs in TSUIC, and its optimization gives the optimal broadcast rate and optimal index code.Presenting a way of grouping the vertices of confusion graphs in TSUIC : By exploiting the symmetry of the confusion graph, we propose a way of grouping its vertices for analysis purposes mainly in its two-sender graph coloring. In particular, this grouping helps us to analyze the number of colors used in two-sender graph coloring of a confusion graph.Deriving the optimal broadcast rates of TSUIC problems as a function of the optimal broadcast rates of its sub-problems (Theorems 4\u20138): We divide a TSUIC problem into three independent sub-problems based on the requested messages by receivers, specifically whether the messages are present in only one of the senders or in both senders. Now in TSUIC, considering the interactions (defined by side-information available at the receivers) between these three independent sub-problems, we derive the optimal broadcast rate (in both asymptotic and non-asymptotic regimes in the message size) of the problem as a function of the optimal broadcast rates of its sub-problems. Moreover, we bound the optimal broadcast rate, and show that the bounds are tight for several classes of TSUIC instances (sometimes with conditions). 
Furthermore, we find a class of TSUIC instances where a TSUIC scheme can achieve the same optimal broadcast rate as the same instances when the two senders form a single sender having all messages.Characterizing a class of TSUIC instances where a certain type of side-information is not critical (Corollary 1): For a class of TSUIC instances, we prove that certain interactions between the three independent sub-problems can be removed without affecting the optimal broadcast rate (in the asymptotic regime). This means that those interactions are not critical.Generalizing the results of some classes of TSUIC problems to multiple senders : For some classes of TSUIC problems, we generalize the two-sender graph coloring of confusion graphs and the proposed grouping of their vertices. Then, we compute the optimal broadcast rates of those problems as a function of the optimal broadcast rates of their sub-problems.The contributions of this paper are summarized as follows:After posting the first draft of this paper on ArxivN independent messages t binary bits. There are N receivers S, having all N messages In this paper, we consider unicast index coding. There are Definition\u00a01\u00a0(Two-sender\u00a0index\u00a0code).A two-sender index code ((i)\u00a0an encoding function for each sender (ii)\u00a0a decoding function for every receiver r, r receives sub-codewords from both senders without any noise, and decodes This means that each sender Now, we define the aggregate normalized codelength, which measures the performance of a code Definition\u00a02\u00a0.The broadcast rate of an index code (with a single sender or two senders) is the total number of transmitted bits per received message bit. In TSUIC, it is denoted by achievable for a UIC problem if there exists an index code of normalized length \u2113.For the rest of the paper, we refer to normalized codelength simply as codelength.Definition\u00a03\u00a0.The optimal broadcast rate for a given index-coding problem with t-bit messages is SSUIC and TSUIC. The optimal broadcast rate over all t is defined as \u2019s lemma .Remark\u00a01.With the broadcast rate as a performance metric, we can treat SSUIC as a special case of TSUIC when An index-coding problem can be modeled by graphs, which are defined as follows:Definition\u00a04\u00a0(Directed\u00a0graphs\u00a0and\u00a0undirected\u00a0graphs).A directed graph is an ordered pair From now on in this paper, we call directed graphs simply digraphs, and undirected graphs simply graphs.N receivers, and the arc set i to vertex j if and only if receiver i has message j) in its side-information. Thus, in a side-information digraph, i in D. In this paper, for convenience, a receiver i is also referred to as a vertex i, and vice versa. We also use the compact form of representation of an instance of UIC problems as used by Arbabjolfaei et al. [The receivers\u2019 message setting of a UIC problem is represented by a side-information digraph i et al. 
, where aprivate messages and common messages defined as follows: Let D, without loss of generality, we define the following sub-digraphs induced by the following vertex subsets that partition D induced by vertices D such that In TSUIC, Definition\u00a05\u00a0(Constraint\u00a0due\u00a0to\u00a0the\u00a0two\u00a0senders).The constraint due to the two senders is the following: whilst encoding, any two private messages sender-constraint graph.In TSUIC, to reflect the senders\u2019 message setting, we introduce an undirected graph, denoted by D and t, and over all t, respectively. As a TSUIC problem is described by D, three sub-problems based on the type of messages are In a TSUIC problem, if there is no common message, i.e., roblems .For simplicity, an interaction between the sub-digraphs H, we get a total of 64 possible cases of the orientation of arcs among its vertices. As the vertices 1 and 2 of H can be swapped because we can interchange H. Now, depending upon the type of orientation of arcs among the vertices of H, we classify all unique cases into two categories: (i) CASE I\u2014Acyclic orientation , and (ii) CASE II\u2014with some cyclic orientation (22 cases). CASE II is further classified into smaller sub-cases II-A, II-B, II-C, and II-D. Refer to H is labeled D defines arcs between them (not within the sub-digraph), and the cases of interactions (acyclic or cyclic) are defined with respect to the orientation of the arcs between the sub-digraphs. In this paper, a fully-participated interaction and a partially-participated interaction between D are called a cyclic-fully-participated interaction and a cyclic-partially-participated interaction between the sub-digraphs, respectively, if and only if Considering the digraph D. Moreover, structural properties can be used to determine the criticality/non-criticality of arcs in TSUIC as in its SSUIC counterpart [For SSUIC, Arbabjolfaei and Kim for an index-coding problem is represented by a graph called a confusion graph, defined as follows:For an index-coding problem modeled by a side-information digraph Definition\u00a06\u00a0(Confusion\u00a0graph).The confusion graph, denoted (i)\u00a0(ii)\u00a0Before proposing a notion of coloring for TSUIC, we first recall the standard definition of the graph coloring in the following:Definition\u00a07\u00a0(Graph\u00a0coloring\u00a0and\u00a0Chromatic\u00a0number).A proper graph coloring of a graph G is an onto function sets of independent vertices where all vertices belonging to one set are assigned with the same color in the graph coloring. Here, a set of independent vertices refers to a vertex set where any pair of vertices are not connected by an edge in independent vertex set. The tuples representing vertices within an independent vertex set are not confusable, and hence they can be coded into the same codeword. Assigning each independent vertex set a unique codeword provides us a valid index code having D with t-bit messages can be obtained by using confusion graphs. This is stated in the following theorem.Consider coloring a confusion graph Theorem\u00a01. is used for a general case, and the remaining two indices . 
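As a small illustration of the confusion-graph machinery (Definition 6) and of Theorem 1 in the single-sender case with t = 1: the sketch below builds the confusion graph of a three-receiver instance in which receiver i requests message i and knows message i+1 (mod 3) as side information, then obtains the broadcast rate from the chromatic number by brute force. The instance and all names are illustrative.

```python
from itertools import product
from math import ceil, log2

N = 3
wants = {0: 0, 1: 1, 2: 2}
knows = {0: (1,), 1: (2,), 2: (0,)}            # cyclic side information

tuples = list(product((0, 1), repeat=N))       # all 1-bit message tuples

def confusable(x, y):
    # confusable iff some receiver sees identical side information in x and y
    # but a different requested bit
    return any(all(x[k] == y[k] for k in knows[r])
               and x[wants[r]] != y[wants[r]] for r in range(N))

V = len(tuples)
E = [(i, j) for i in range(V) for j in range(i)
     if confusable(tuples[i], tuples[j])]

def chromatic_number():
    for k in range(1, V + 1):                  # brute force: fine for V = 8
        if any(all(c[i] != c[j] for i, j in E)
               for c in product(range(k), repeat=V)):
            return k

chi = chromatic_number()
print(f"chi = {chi}, beta_1 = {ceil(log2(chi))}")  # Theorem 1 with t = 1
```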
In TSUIC, the two senders encode separately, so, in the aforementioned definition of coloring, we need to assign an ordered pair of colors to each vertex, where the first color is associated with the first sender and the second with the second sender. In the form of lemmas, we now discuss the two-sender graph coloring of confusion graphs.

Lemma 1. For any two distinct vertices that are confusable at a receiver whose message is private to the first sender, the assigned ordered pairs must differ in their first color. Proof. Since the two senders encode separately, only the sub-codeword of the first sender can distinguish such tuples. □

By similar reasoning as in the above proof (of Lemma 1), one can prove the following lemmas:

Lemma 2. For any two distinct vertices that are confusable at a receiver whose message is private to the second sender, the assigned ordered pairs must differ in their second color.

Lemma 3. For any two distinct vertices that are confusable at a receiver whose message is common, the assigned ordered pairs must differ in at least one color.

Lemma 4. For any two vertices that are not confusable, no constraint is imposed on the assigned pairs.

For a TSUIC problem with t-bit messages, we have the following theorem:

Theorem 2. The optimal broadcast rate for a TSUIC problem with t-bit messages is determined by the minimum number of ordered color pairs in a valid two-sender coloring of the confusion graph. Proof. For the upper bound, coding each sender's color classes separately, as in the lemmas above, yields β_t ≤ …; the matching lower bound follows from the decoding requirements at the receivers. □"}
{"text": "… We classified every ordered gene pair (vi, vj) into 'Shorter-path direction' and 'Longer-path direction' groups if l(vi→vj) < l(vj→vi) or l(vi→vj) > l(vj→vi) (see Methods for the definition), respectively, and forced knockout mutations in the order of vi and vj. We note that gene pairs which are not bidirectionally connected were excluded from the analysis to remove the effect of the connectedness factor on the dynamics. We compared the average mutation-sensitivity values between the two groups. As shown in the corresponding figure, the mutation-sensitivity of the 'Longer-path direction' group is significantly higher than that of the 'Shorter-path direction' group in all signaling networks for most time gap parameter values. In other words, the network is more sensitive when the double knockout mutation occurs in the order inducing a longer path than in the reverse order. Next, we classified every ordered gene pair into 'More-paths direction' and 'Fewer-paths direction' groups if n(vi→vj) > n(vj→vi) or n(vi→vj) < n(vj→vi) (see Methods for the definition), respectively, and forced knockout mutations in the order of vi and vj. We compared the average mutation-sensitivity values between them. As shown in the corresponding figure, the mutation-sensitivity of the former group is significantly smaller than that of the latter group in both signaling networks, almost irrespective of the time gap parameter. In other words, the network is more sensitive when the double knockout mutation occurs in the order involving fewer paths than in the reverse order. We note that our previous study showed that the dynamical influence of a gene on another gene is likely to be lessened as the path length increases and the number of paths decreases.

We then compared the mutation-sensitivity of ordered pairs of drug-target (DT) genes with that of pairs of non-drug-target (Non-DT) genes (P-values computed using the Mann-Whitney U test). As shown in the corresponding figure, the mutation-sensitivity of the former group is significantly smaller than that of the latter group in both signaling networks regardless of the time gap parameter. In addition, we further compared the order-specificity between the two groups. As shown in the corresponding figure, the values of the 'Non-DT → Non-DT' and 'DT → DT' groups were highest and lowest, respectively, and they bounded the values of the other groups. Furthermore, the sensitivity of the 'Non-DT → DT' group was significantly higher than that of the 'DT → Non-DT' group for most time gap values. Considering that these two groups are identical except for the order within a gene pair, the result implies that the sensitivity difference was caused by the mutation order alone. We further examined the order-specificity values of the DT and Non-DT groups and found that the former is larger than the latter. This finding is interesting considering that the mutation-sensitivity of 'DT → DT' was smaller than that of 'Non-DT → Non-DT' in the corresponding figure.
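The ordered-pair grouping used throughout this analysis can be sketched as follows (our own illustration on a toy network, assuming networkx is available; the gene identities are hypothetical):

```python
import networkx as nx

# Classify bidirectionally connected ordered pairs (u, v) by comparing the
# shortest-path lengths l(u->v) and l(v->u), as described in the text.
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 3)])  # toy signaling network

longer, shorter = [], []
for u in G:
    for v in G:
        if u == v or not (nx.has_path(G, u, v) and nx.has_path(G, v, u)):
            continue  # keep only bidirectionally connected pairs
        l_uv = nx.shortest_path_length(G, u, v)
        l_vu = nx.shortest_path_length(G, v, u)
        if l_uv > l_vu:
            longer.append((u, v))   # knocking out u first induces the longer path
        elif l_uv < l_vu:
            shorter.append((u, v))

print("longer-path direction:", longer)
print("shorter-path direction:", shorter)
```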
Some previous studies have investigated the characteristics of drug-target genes through network-based structural analysis, and it was reported, for example, that applying roscovitine before doxorubicin is synthetically lethal in breast cancer cells. In addition, we examined ordered mutations between tumor-suppressor genes (TSGs) and oncogenes (OCGs): every ordered pair consisting of a TSG and an OCG was classified into one of two groups, 'TSG → OCG' and 'OCG → TSG'. For every ordered pair of genes, we computed the mutation-sensitivity after forcing double knockout mutations according to the order of the gene pair. Then, we compared the average mutation-sensitivity values between those two groups. As shown in the corresponding figure, the mutation-sensitivity value of the former group was significantly smaller than that of the latter group in all signaling networks, almost irrespective of the time gap. In other words, the network is more sensitive when oncogenes are mutated before tumor suppressors than in the reverse order. In addition, we further compared the order-specificity between the two groups, 'TSG' and 'OCG', and found that the order-specificity values of the former group were smaller than those of the latter group, almost irrespective of the time gap. It is known that tumor suppressors and oncogenes perform their cellular functions jointly in tumor progression, and this finding can also be related to some previous studies on ordered mutations between oncogenes and tumor-suppressor genes. For example, the double mutation in the order of TP53 and NOTCH, which are a representative tumor suppressor and oncogene, respectively, was frequently observed in early-stage esophageal carcinoma patients. It was also reported that activation of RAS, which is another oncogene, before loss of P53 formed a malignant tumor with metastatic behavior, whereas the reverse-ordered mutation resulted in benign tumors.

In this study, we defined the mutation-sensitivity and the order-specificity based on a Boolean network model to unravel the effects of ordered mutations on dynamics in signaling networks. It was interesting to observe that some structural properties of signaling networks can be a good indicator to explain the dynamical behavior with respect to ordered-mutation experiments. In addition, it was shown that various functionally important genes are related to the ordered-mutation-inducing dynamics. These results can enhance the understanding of the dynamic effects of ordered double-mutations on the complex dynamics of large-scale biological systems, which supports the usefulness of our approach. Despite its usefulness, there are some limitations to be discussed. In this study, we employed random nested canalyzing functions to simulate the Boolean dynamics of the molecular signaling networks. This artificial specification can be a limitation of this study, although some previous studies have proven the usefulness of the model in fitting update rules to real biological data.

Many previous studies investigated ordered mutations and found statistical relations with cancer development. Recently, these studies were extended to incorporate the analysis of biological networks. However, they are limited in identifying the significance of ordered mutations because they did not focus on the analysis of network dynamics. In this regard, we quantified the ordered-mutation-inducing dynamics by defining the mutation-sensitivity and the order-specificity measures using a Boolean network model. Specifically, they represent the probability that a network converges to a different attractor after a double knockout mutation, and the probability with which a network converges to different attractors under different mutation orders, respectively.
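The two measures can be estimated with the following toy re-implementation (our own simplification, not the authors' code): random truth tables replace the nested canalyzing functions, updates are synchronous, knockouts clamp a gene to 0, and the second knockout is applied after a time gap T.

```python
import random
from itertools import product

random.seed(0)
N, T = 5, 2
inputs = {i: random.sample(range(N), 2) for i in range(N)}
tables = {i: {bits: random.randint(0, 1) for bits in product((0, 1), repeat=2)}
          for i in range(N)}

def step(state, ko):
    return tuple(0 if i in ko else tables[i][tuple(state[j] for j in inputs[i])]
                 for i in range(N))

def attractor(state, schedule):
    # schedule maps time -> genes knocked out from that time onwards
    ko = set()
    for t in range(100):                     # transient with scheduled knockouts
        ko |= schedule.get(t, set())
        state = step(state, ko)
    cycle, s = [], state                     # afterwards extract the cycle
    while s not in cycle:
        cycle.append(s)
        s = step(s, ko)
    return frozenset(cycle)

u, v = 0, 1
diff_ko = diff_order = 0
states = list(product((0, 1), repeat=N))
for s in states:
    wild = attractor(s, {})
    uv = attractor(s, {0: {u}, T: {v}})      # knock out u, then v
    vu = attractor(s, {0: {v}, T: {u}})      # knock out v, then u
    mask = lambda A: frozenset(tuple(st[i] for i in range(N) if i not in (u, v))
                               for st in A)  # compare only unmutated genes
    diff_ko += (mask(uv) != mask(wild))
    diff_order += (uv != vu)

print("mutation-sensitivity ~", diff_ko / len(states))
print("order-specificity  ~", diff_order / len(states))
```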
It was not rare to observe both nonzero sensitivity and specificity values in large-scale signaling networks. In addition, we examined the relationships between structural characteristics, such as the path length, the number of paths, and feedback loops, and the ordered-mutation-inducing dynamics in the signaling networks. Interestingly, they showed significant relationships, which implies that such structural properties need to be considered in studies involving ordered-mutation experiments. Next, we investigated the ordered-mutation-inducing dynamics of various functionally important genes. The number of drug-target genes was negatively correlated with the mutation-sensitivity, whereas the network was more specific to the order of mutations involving drug-target genes than the remaining genes. In addition, we found that tumor suppressors can efficiently suppress the amplification of oncogenes when the former genes are mutated earlier than the latter genes. Taken together, our results enhance the understanding of the dynamic effects of ordered double-mutations on the complex dynamics of large-scale biological systems.

Additional file 1: Figure S1. Relations of structural properties with ordered-mutation-inducing dynamics in BA networks. A total of 250 BA random networks with |V| = 50 and |A| = 100 were generated. The time gap (T) was set to 1–10. (a) Mutation-sensitivity with respect to the shortest path length: all pairs of nodes involving an FBL were classified into 'Shorter-path direction' and 'Longer-path direction' groups according to whether l(vi→vj) < l(vj→vi) or l(vi→vj) > l(vj→vi), respectively. (b) Mutation-sensitivity with respect to the number of paths: all pairs of nodes were classified into 'More-paths direction' and 'Fewer-paths direction' groups according to whether n(vi→vj) > n(vj→vi) or n(vi→vj) < n(vj→vi), respectively. (c) Mutation-sensitivity with respect to FBLs: all pairs of nodes were classified into 'FBL' and 'Non-FBL' groups according to whether any gene of the pair is involved in an FBL. (d) Order-specificity with respect to FBLs. All P-values were computed using the Mann-Whitney U test. Table S1. Gene information of HCS, consisting of 1192 genes, including association with drug targets, tumor suppressors, and oncogenes. Table S2. Gene information of KEGG, consisting of 1659 genes, including association with drug targets, tumor suppressors, and oncogenes. Table S3. Gene information of TGL, consisting of 61 genes, including association with drug targets, tumor suppressors, and oncogenes."}
{"text": "Correction to: Inj Epidemiol 7, 46 (2020), https://doi.org/10.1186/s40621-020-00272-z. In the original publication of this article (Swanson 2020), the author requested the following corrections: (1) inserting the word 'insidious' in the first sentence, to stet the author's original text; and (2) adding an 's' to 'invite' in the last sentence, so that the verb matches the singular subject of the sentence. The publisher apologizes to the readers and authors for the inconvenience. The original publication has been corrected."}
{"text": "Glynne-Jones R, Sebag-Montefiore D, Meadows HM, et al.
Best time to assess complete clinical response after chemoradiotherapy in squamous cell carcinoma of the anus (ACT II): a post-hoc analysis of randomised controlled phase 3 trial. Lancet Oncol 2017; 18: 347–56. This Article should have been published under the copyright "© The Authors. Published by Elsevier Ltd. This is an Open Access article under the CC BY license". This correction has been made to the online version as of March 2, 2017."}
{"text": "We study good-enough synthesis (ge-synthesis) in two settings, Boolean and multi-valued. In both, we suggest and solve various definitions of ge-synthesis, corresponding to different ways a designer may want to take hopefulness into account. We show that in all variants, ge-synthesis is not computationally harder than traditional synthesis, and can be implemented on top of existing tools. Our algorithms are based on careful combinations of nondeterministic and universal automata. We augment systems that ge-realize their specifications by monitors that provide satisfaction information. In the multi-valued setting, we provide both a worst-case analysis and an expectation-based one, the latter corresponding to an interaction with a stochastic environment.

Synthesis is the automated construction of a system from its specification: given a specification over sets I and O of input and output signals, the goal is to construct a finite-state system that satisfies the specification; at each step, the system reads an assignment to the signals in I and responds with an assignment to the signals in O. Thus, with every input sequence, the system associates an output sequence, and it realizes the specification if the specification is satisfied against all input sequences the environment may generate. Synthesis is of special interest for autonomous systems, which interact with unexpected environments and often replace human behavior, which is only expected to be good enough. Throughout the paper, we construct products of automata, whose state space is the product of the components' state spaces. Accordingly, the product can be minimized to include only consistent pairs. Also, since traditional-synthesis algorithms, in particular the Safraless algorithms we use, can handle automata with generalized Büchi and co-Büchi acceptance conditions, we need only one copy of the product.

[Determinacy of the ge-synthesis game]. Determinacy of games implies that in traditional synthesis, either a specification is I/O-realizable or its negation is O/I-realizable. This is useful, for example, when we want to synthesize a transducer of a bounded size and proceed simultaneously, aiming to synthesize either a system transducer that realizes the specification or an environment transducer that witnesses its unrealizability. For ge-synthesis, simple dualization does not hold, but we do have determinacy in the sense that a specification is either ge-realizable or the environment has a strategy that generates, for each output sequence, an input sequence that is hopeful yet along which the specification is not satisfied.

A drawback of ge-synthesis is that we do not actually know whether the specification is satisfied. In this section we describe two ways to address this drawback. The first way goes beyond providing satisfaction information and enables the designer to partition the specification into a strong component, which should be satisfied in all environments, and a weak component, which should be satisfied only in hopeful ones. The second way augments ge-realizing transducers by flags, raised to indicate the status of the satisfaction. Plain ge-synthesis is suitable especially in settings where we design a system that has to do its best in all environments; ge-synthesis with a guarantee is suitable in settings where we want to make sure that some components of the specification are satisfied in all environments.
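Before the formal treatment, a toy illustration (ours, not the paper's construction) of a transducer augmented with a satisfaction flag: the hypothetical spec requires every request (input 1) to be granted (output 1) immediately; the strategy below grants immediately, so the spec holds on all inputs and the monitor can announce "blue" (satisfaction guaranteed) from the start. A "red" flag would instead be raised on prefixes from which satisfaction is already impossible.

```python
def run(inputs):
    # Grant each request immediately; the flag reports satisfaction status.
    for req in inputs:
        yield (req, "blue")   # satisfaction stays guaranteed under this strategy

print(list(run([1, 0, 1, 1])))
```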
Accordingly, a specification now consists of two LTL formulas, a strong one and a weak one, and a strategy ge-synthesizes it with guarantee if it realizes the strong formula and ge-realizes the weak one, namely satisfies the weak formula whenever the input x is hopeful for it. Recall that the LTL ge-synthesis problem is 2EXPTIME-complete; the problem with a guarantee has the same complexity: the LTL ge-synthesis with guarantee problem is 2EXPTIME-complete.

Consider an LTL formula and the language L of the computations that satisfy it; for a finite word w, we consider the words in L that have w as a prefix. We say that a finite word w is green for L if the system has a strategy with which every continuation of w satisfies L, and that w is light green for L if there is an extension of w in L. When a system is lucky to interact with an environment that generates a green input sequence, we want the system to react in a way that generates a green prefix, and then realizes the specification. Formally, we say that a strategy green realizes L if, for every input sequence x that has a green prefix, the computation it generates with x satisfies L. It is not hard to see that for ge-realizable languages, green and light green coincide. Indeed, if L is universally satisfiable and ge-realizable, then L is realizable.

We show that ge-realizability is strictly stronger than green realizability. We first prove that every strategy that ge-realizes a specification also green realizes it. We then describe a specification that is green realizable and not ge-realizable; the example exploits the fact that an output q must be fixed before the value of Xp is known. Note, however, that green realizability and ge-synthesis coincide for ge-realizable specifications. There are two reasons to care about green realizability even for ge-realizable specifications. The first is that every ge-realizing transducer has the desired property of being also green realizing. The second has to do with our goal of providing the user with information about the satisfaction status, in particular raising a green flag whenever a green prefix is detected. A naive way to detect green prefixes for a specification is to repeatedly check realizability along the interaction with the ge-realizing transducer and raise the green flag whenever a green prefix is detected; this, however, requires the generation of realizability checks throughout the interaction.

Recall that if L is universally satisfiable and ge-realizable, then L is realizable. Accordingly, given a transducer that ge-realizes L, we can add a monitor that detects light-green prefixes: given an LTL formula, the monitor M follows the subset construction of a nondeterministic automaton for L, rejecting a word x iff x is not light green, and accepting it otherwise. Note that the definition of the accepting set F involves universality checking, possibly via complementation, yet no determinization is required, and the size of M is singly exponential. Note also that once the monitor's verdict becomes negative, it remains negative.

Two additional flags of interest are related to safety and co-safety properties. A word x is red for L if, for all possible responses, L is not satisfied: when the environment generates x, then no matter how the system responds, L is not satisfied. A word x is blue for L if there is a response strategy that guarantees satisfaction: when the environment generates x, the system can respond in a way that guarantees satisfaction no matter how the interaction continues. Green flags provide information about satisfaction; red and blue flags for L can likewise be added to a transducer that ge-realizes L. As has been the case with the monitor for green prefixes, their construction is based on applying the subset construction to an NBW for L.
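A finite-word toy of the monitor construction sketched above (an assumption of ours: we use an NFA over finite words instead of an NBW, so that the subset construction stays self-contained; the automaton is hypothetical). A prefix is "light green" if some extension is accepted, "red" if no extension can be accepted, and "blue" here approximates the case where acceptance is already guaranteed.

```python
NFA = {  # NFA for L = words containing the factor "ab"
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'a'): {2}, (2, 'b'): {2},
}
ACCEPT = {2}

def can_reach_accept(states):
    seen, stack = set(states), list(states)
    while stack:
        q = stack.pop()
        if q in ACCEPT:
            return True
        for (p, _), succs in NFA.items():
            if p == q:
                for s in succs - seen:
                    seen.add(s); stack.append(s)
    return False

def monitor(word):
    subset = {0}                           # subset construction on the NFA
    for a in word:
        subset = set().union(*(NFA.get((q, a), set()) for q in subset))
        if not can_reach_accept(subset):
            flag = "red"                   # no extension can satisfy L
        elif subset and subset <= ACCEPT:
            flag = "blue"                  # satisfaction already guaranteed
        else:
            flag = "light green"
        print(a, sorted(subset), flag)

monitor("aab")
```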
It is tempting to move to a multi-valued setting, where an expression stating that there is an output sequence y with which the computation has satisfaction value v is read as x being v-hopeful: an input sequence x is v-hopeful for a specification if there is an output sequence y such that the satisfaction value of the induced computation is at least v. We distinguish between two variants of multi-valued ge-synthesis. In ge-synthesis with a threshold, the input is a specification and a value v, and the goal is a strategy that ge-realizes the specification with respect to v: for every input sequence x, if x is v-hopeful, then the generated computation has satisfaction value at least v. In (threshold-free) ge-synthesis, the goal is a strategy such that for every input sequence x and value v, if x is v-hopeful, then the generated computation has satisfaction value at least v. Note that ge-realization with a threshold is not monotone, in the sense that decreasing the threshold need not lead to ge-realization. Indeed, the lower the threshold v is, the more input sequences are v-hopeful.

There are different ways to analyze the relation between the satisfaction value an input sequence allows and the one the system attains: the difference function, and the ratio function, given by some normalization of the quotient of the two values. The choice of an appropriate function depends on the application. Consider, for example, a controller of an elevator in an n-floor building. The environment sends to the controller requests, by means of a truth assignment to n signals, and the satisfaction value is 1 when the slowest request is granted immediately. Sure enough, there is no controller that attains satisfaction value 1 on all input sequences, yet with ge-realizability we can synthesize a controller that behaves in an optimal way. For example, using the difference function, we measure how far the performance of the controller on an input sequence x is from the best performance possible on x. Note that such a best performance needs a look-ahead on requests yet to come, which is indeed the satisfaction value of x; thus, it corresponds to the assumption of an off-line controller. Accordingly, using the ratio function, we can synthesize a system with the best competitive ratio for an on-line interaction. The two approaches taken earlier carry over: for ge-synthesis with a threshold, we can use a function that compares the attained value with the threshold, and for ge-synthesis (without a threshold), a function that compares it with the best achievable value.

The ge-synthesis problems studied so far are Boolean in the sense that a specification either is ge-realizable or is not. In particular, in case the specification is not ge-realizable, synthesis algorithms only return "no". In this section we add a quantitative measure also to the underlying realizability question. We do so by assuming a stochastic environment, with a known distribution on the input sequences, and analyzing the expected performance of the system. For completeness, we remind the reader of some basics of probability theory; for a comprehensive reference see, e.g., the standard literature. Consider a probability distribution over the set of input sequences. A random variable is then a function X from sequences to values; when X has a finite image V, which is the case in our setting, its expected value is the sum, over the values v in V, of v times the probability that X attains v. Next, consider an event E; the conditional expectation of X with respect to E restricts X to words in E, and normalizes according to the probability of E itself. The high-quality synthesis problem is then: given an LTL formula with quantitative semantics, synthesize a system that maximizes the expected satisfaction value.

The flags of the Boolean setting extend to the quantitative one, à la the announced satisfaction level: rather than talking about prefixes being green, red, or blue, we talk about them being v-green, v-red, and v-blue, for values v whose satisfaction is guaranteed (in green and blue flags) or is impossible (in red ones). We can think of those as "degrees" of green, red, and blue. Below, we formalize this intuition and argue that an augmentation of a transducer that ge-realizes a quantitative language remains possible. For a quantitative language L and a finite word w, we say that w is v-green for L if, continuing from w, value v is realizable: there is a transducer with which every extension of w attains value at least v. A word is light v-green for L if there is an extension attaining value at least v; thus, when the environment generates such a word, the system can respond in a way that keeps value v possible. Finally, we say that L is green realizable if there is a strategy such that for every value v and every input that is v-green for L, the generated computation attains value at least v. It is not hard to see that the Boolean results above extend to this setting.
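The expectation-based analysis can be illustrated as follows (our own sketch: inputs are i.i.d. fair bits, the satisfaction value is a hypothetical "fraction of requests granted at the same step", and the online controller can only guess from history, while the offline optimum has look-ahead):

```python
from itertools import product

def controller(x):                      # Moore-style: guesses from history only
    return [0] + [x[i - 1] for i in range(1, len(x))]

def value(x, y):                        # fraction of requests granted immediately
    reqs = sum(x)
    return 1.0 if reqs == 0 else sum(1 for i, b in enumerate(x) if b and y[i]) / reqs

n, diff, ratio = 5, 0.0, 0.0
for x in product((0, 1), repeat=n):     # uniform distribution over input words
    v_sys = value(x, controller(x))
    v_best = max(value(x, y) for y in product((0, 1), repeat=n))  # offline optimum
    diff += (v_best - v_sys) / 2 ** n   # expected difference
    ratio += (v_sys / v_best if v_best else 1.0) / 2 ** n  # expected ratio

print("expected difference:", round(diff, 4))
print("expected ratio     :", round(ratio, 4))
```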
We introduced and solved several variants of ge-synthesis. Our complexity results are tight and show that ge-synthesis is not more complex than traditional synthesis. In practice, however, traditional synthesis algorithms do not scale well, and much research is devoted to the development of methods and heuristics for coping with the implementation challenges of synthesis. A natural future research direction is to extend these heuristics and methods to ge-synthesis. We mention here two specific examples. First, efficient synthesis algorithms have been developed for fragments of LTL, most notably the GR(1) fragment; ge-synthesis, however, is not handled by its known algorithms, and extending them is an interesting challenge. Second, the success of SAT-based model checking has led to the development of SAT-based synthesis algorithms, which may likewise be lifted to ge-synthesis; a related effort is bounded synthesis algorithms, where the size of the synthesized system is bounded."}
{"text": "The One-to-one Pickup and Delivery Problem with Shortest-path Transport along Real-life Paths (OPDPSTRP) is presented in this paper. It is a variation of the One-to-one Pickup and Delivery Problem (OPDP), which is common in daily life, arising for example in Passenger Train Operation Plans (PTOP) and in partial taxi-sharing. Unlike the classical OPDP, in the OPDPSTRP, (1) each demand must be transported along the shortest path, according to passenger/shipper requirements, and (2) each vehicle must travel along a real-life path. First, six route structure rules are proposed for the OPDPSTRP, and a Mixed-Integer Programming (MIP) model is formulated for it. Second, a Variable Neighborhood Descent (VND), a Variable Neighborhood Search (VNS), a Multi-Start VND (MS_VND) and a Multi-Start VNS (MS_VNS) with five neighborhood operators have been developed to solve the problem. Finally, the Gurobi solver, the VND, the VNS, the MS_VND and the MS_VNS are compared with each other on 84 random instances partitioned into small, medium and large connected graphs. From the test results we found that the solutions generated by these approaches are often comparable with those found by the Gurobi solver, and are better than the Gurobi solutions on instances with larger numbers of demands. In almost all instances, the MS_VND significantly outperforms the VND and the VNS in terms of solution quality, and outperforms the MS_VNS both in terms of solution quality and CPU time. On instances with large numbers of demands, the MS_VND is still able to generate good feasible solutions in a reasonable CPU time, which is of vital practical significance for real-life instances.

Nowadays a high-speed rail network has been formed in China, and it is an urgent problem to design Passenger Train Operation Plans (PTOP) based on networks, which differ from the general PTOP based on lines. Generally, there are two features in the PTOP: (1) passengers should be transported through the shortest path, and (2) trains cannot visit any station more than once. So the PTOP based on networks can be refined as follows: there are several pickup-delivery demands (pd-pairs) and vehicles in a real-life connected graph. Each chosen pd-pair must be transported through the shortest path from the pickup point to the delivery point, according to passenger/shipper requirements. Each vehicle starts at a given location and ends at the final delivery point of the pd-pairs it transports, and cannot visit (stop at or pass through) any point more than once; that is, each vehicle should travel along a real-life path. Constraints such as vehicle load capacities, vehicle travel distances, and vehicle stops need to be considered. This problem can be addressed by introducing a set of maximum-income routes to be driven by a fleet of vehicles to serve a group of known pd-pairs.
Referred to as the One-to-one Pickup and Delivery Problem with Shortest-path Transport along Real-life Paths (OPDPSTRP), this problem can be classed under the One-to-one Pickup and Delivery Problem (OPDP). Since each pd-pair must be transported along the shortest path and vehicle stops need to be considered, the OPDPSTRP is studied on connected graphs, which should not be abstracted into complete graphs. The OPDPSTRP can also be applied to other transportation problems sharing these two features of the network-based PTOP, such as partial taxi-sharing. To the best of our knowledge, the OPDPSTRP has rarely been studied in the literature, so a model of the OPDPSTRP is developed and efficient algorithms are proposed for it in this paper.

Section 2 presents related studies, while Section 3 studies the relationships between pd-pairs and presents the model for the OPDPSTRP. Section 4 presents a Variable Neighborhood Descent (VND), a Variable Neighborhood Search (VNS), a Multi-Start VND (MS_VND) and a Multi-Start VNS (MS_VNS) based on five new neighborhoods for the OPDPSTRP. Section 5 proposes a set of random instances and analyses the efficiency of the Gurobi solver, the VND, the VNS, the MS_VND and the MS_VNS for the OPDPSTRP. Finally, conclusions and future work are presented in Section 6.

The OPDPSTRP belongs to the General Pickup and Delivery Problem (GPDP), which is an NP-hard problem. Many scholars have carried out research on the GPDP over the past few years, in response to the numerous kinds of GPDP applied in real life, such as GPDP with time windows, and dynamic, stochastic, unpaired/paired, single/multi-vehicle, single/multi-depot and single/multi-commodity variants. Parragh et al. [1, 2] reviewed the GPDP and its sub-classes. The GVRPPD can be further divided into two sub-classes: unpaired and paired. The first sub-class refers to situations where pickup and delivery locations are unpaired and each unit picked up can be used to fulfill the demand of any delivery customer, such as the Many-to-many PDP. The second sub-class refers to paired pickup and delivery points. Berbeglia et al. [9, 10] discussed the GPDP from both static and dynamic perspectives. Most classical OPDPs are studied in complete graphs, where pickup points must be visited prior to the corresponding delivery points. In the OPDPSTRP, each vehicle starts at a given location and ends at the final delivery point of the contents it transports, so the problem can be considered a multi-depot (vehicles) problem, while most OPDP research is based on a single depot, such as that reviewed by Psaraftis [12, 13].

In order to define the proposed OPDPSTRP in mathematical terms, we specify a connected graph G = (N, E), in which N = {1,…,n} is the set of vertexes, E = {1,…,e} the set of edges, P = {1,…,p} the set of pd-pairs, and K = {1,…,m} the set of vehicles. Each pd-pair i with demand qi yields revenue πi. Each vehicle k∈K has a maximum capacity Qk and a fixed cost vck. The transportation cost per unit length of vehicle k is tck, and vehicle k incurs a stop cost at each node where it stops. The system also obeys the following assumptions: (1) the attributes of the pd-pairs may differ from one another; (2) the total cost of each vehicle consists of constant cost, travel cost, and stop cost; (3) in order to maximize income, not all pd-pairs need to be transported; (4) there is only one shortest path between any two nodes in the graph. By defining the afore-mentioned problem, we hope to identify a suitable scheme to help optimize the benefit. The constants and variables used in this paper are listed in the corresponding table. The model of the OPDPSTRP is formulated in this section, and its route structure is studied in Section 3.4.
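Before turning to the objective and constraints, a minimal data model for the notation above may help (field names are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class PdPair:
    pickup: int      # pickup vertex
    delivery: int    # delivery vertex
    q: float         # demand q_i
    pi: float        # revenue pi_i

@dataclass
class Vehicle:
    start: int       # initial location (depot vertex)
    Q: float         # capacity Q_k
    vc: float        # fixed cost vc_k
    tc: float        # travel cost per unit length tc_k
    sc: float        # cost per stop

pairs = [PdPair(1, 4, 2, 15.0), PdPair(2, 5, 1, 15.0)]
veh = Vehicle(start=1, Q=3, vc=1.0, tc=1.0, sc=0.5)
```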
The objective function (1) maximizes the total profit, in which the decision variables describe the transport of pd-pairs and the travel of vehicles between nodes i and j. Constraints (7) ensure that each pd-pair i is transported no more than once. Constraint (8) ensures that no vehicle is overloaded. Constraint (9) determines whether the vehicle stops at node n or not. Constraint (10) ensures that the number of stops of each route (not including the depot/vehicle) does not exceed M. Constraint (11) determines whether edge e is traveled along or not. Constraints (12) ensure that each route with pd-pairs is assigned to one vehicle. Constraint (13) ensures that the length of each route (not including the depot/vehicle) is not longer than D. Constraints (14)-(17) introduce the decision variables.

The route structure of the OPDPSTRP is now studied for the model.

Definition 1: In a real-life connected graph, if all pd-pairs in a route i are transported through the shortest path, the route starts with the first pickup point, and the route is a path, then route i is defined to be Route-Structure-Feasible (RSF for short).

Definition 2: If an RSF route stems from inserting pd-pair i into route j, then pd-pair i is said to be insertable into route j according to route structure. pd_R_rs_judge(i,j) is defined as the route-structure feasibility judgement parameter of inserting pd-pair i into route j, and the matrix [pd_R_rs_judge(i,j)] is defined as the route-structure feasibility judgement matrix of inserting (RSFJMI for short).

Definition 3: If an RSF route is a combination of pd-pair i and pd-pair j, then pd-pair i is said to be combinable with pd-pair j according to route structure. pd_combine_rs_judge(i,j) is defined as the route-structure feasibility judgement parameter of combining pd-pair i with j, and the matrix [pd_combine_rs_judge(i,j)] is defined as the route-structure feasibility judgement matrix of combining (RSFJMC for short). It is assumed that each pd-pair can combine with any vehicle, that is, pd_combine_rs_judge(p+1,j) = 1.

Definition 4: In an RSF route, if pd-pair j can be picked up not prior to vehicle/pd-pair i, then pd-pair j is said to connect to vehicle/pd-pair i. The parameter connect_to_judge(i,j) is defined as the judgment parameter for pd-pair j connecting to vehicle/pd-pair i. It is assumed that each pd-pair can connect to any vehicle, that is, connect_to_judge(p+1,j) = 1. One vehicle cannot connect to another vehicle.

Definition 5: If pd-pair j can connect to vehicle/pd-pair i, and pd-pair j can be delivered not prior to vehicle/pd-pair i, then pd-pair j is said to connect after vehicle/pd-pair i. The parameter connect_after_judge(i,j) is defined as the judgment parameter for pd-pair j connecting after vehicle/pd-pair i. It is assumed that each pd-pair can connect after any vehicle, that is, connect_after_judge(p+1,j) = 1.

Definition 6: The nodes traveled by vehicle k in the route combined from pd-pair i and pd-pair j can be classified into stop nodes (stopped at by vehicle k) and pass nodes (passed but not stopped at by vehicle k). pi and di are the pickup point and delivery point of pd-pair i, correspondingly. sn_od(i,n) is defined as the judgment parameter of whether pd-pair i picks up/delivers at node n.

Definition 7: The sections in the route combined from pd-pair i and pd-pair j include weighting sections (sections traveled by a vehicle with pd-pairs on board) and connecting sections (sections traveled by a vehicle without pd-pairs).
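The RSF test of Definition 1 can be sketched as follows (our formulation, assuming networkx and unweighted edges for brevity; a weighted graph would pass weight="length"):

```python
import networkx as nx

def is_rsf(G, stops, pd_pairs):
    # Expand consecutive stops into the full travelled walk.
    full = [stops[0]]
    for a, b in zip(stops, stops[1:]):
        full += nx.shortest_path(G, a, b)[1:]
    if len(set(full)) != len(full):
        return False                            # revisits a vertex: not a path
    for pickup, delivery in pd_pairs:
        if pickup not in full or delivery not in full:
            return False
        i, j = full.index(pickup), full.index(delivery)
        if i > j or j - i != nx.shortest_path_length(G, pickup, delivery):
            return False                        # delivery first, or a detour
    return True

G = nx.path_graph(5)                            # 0-1-2-3-4
print(is_rsf(G, [0, 2, 4], [(0, 4), (2, 4)]))   # True
```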
The constants le_e and lc_{i,j} are defined as their lengths: le_e is the length of edge e, and lc_{i,j} is the length of the connecting section between pd-pair i and pd-pair j. Here connect_to_judge(i,j) = 1 means pd-pair j connects to pd-pair i, and connect_after_judge(i,j) = 1 means pd-pair j can connect after pd-pair i, so that j connects after pd-pair i successfully (Definition 4 and Definition 5). According to the above, in a route combining more than two pd-pairs/vehicles and traveled by vehicle k, the lengths of the weighting sections and connecting sections can be formulated from the edge lengths and connecting lengths, and whether vehicle k stops at node n can be determined from the pickup/delivery indicators sn_od(i,n). On this basis, six route structure rules are proposed:

Rule 1: Pd-pair j can connect to vehicle/pd-pair i only when connect_to_judge(i,j) = 1.
Rule 2: Each pd-pair transported by a vehicle must connect, exactly once, to the vehicle or to another pd-pair that connects after a third pd-pair or the vehicle.
Rule 3: Each vehicle/pd-pair must not be connected after by more than one pd-pair.
Rule 4: Each pd-pair must not connect to itself.
Rule 5: There cannot be circles in any route.
Rule 6: Each pd-pair not being transported should not connect to any pd-pair or vehicle.

Rule 1, Rule 4, Rule 5 and Rule 6 are apparent. For Rule 2 and Rule 3, suppose pd-pairs i1, i2 and i3 are transported by vehicle k; the vehicle route must then be as follows: pd-pair i1 connects after vehicle k, pd-pair i2 connects to pd-pair i1, and pd-pair i3 connects after pd-pair i1. If pd-pair i2 did not connect after any pd-pair or vehicle, the length (le5+le6+le7) between points d_{i2} and d_{i1} might be double-counted; if vehicle k were connected after by more than one pd-pair, the length (le1+le2+le3+le4+le5+le6+le7) between point k and d_{i1} might be double-counted, as vehicle k has been connected after by pd-pair i1 already. All the rules have been considered in Section 3.3.

In the illustrated example: pd-pair i5 can be inserted into route 1 while pd-pair i4 cannot; pd-pair i3 can be combined with i1 while pd-pair i4 cannot; pd-pairs i2 and i3 can connect to pd-pair i1, pd-pair i4 can connect to pd-pair i5, and pd-pair i5 can connect to pd-pair i4; pd-pair i3 can connect after pd-pairs i1 and i2, pd-pair i4 can connect after pd-pair i5, and each pd-pair can connect after all vehicles. Points 1 and 6 are vehicle locations; points 2, 3, 4, 15, 14, 12, 6, 8 and 10 are stop nodes; lc_{k1,i1} = le_{e1} and lc_{i1,i3} = le_{e22}. Let π_i = 15, vc_k = 1 and tc_k = 1; the route incomes then follow.

Cordeau et al. presented related neighborhood operators. In our algorithms, the route-structure feasibility judgement matrices (Definitions 2 and 3 in Section 3.4) are applied in the neighborhood transformation methods to ensure that each selected pd-pair can be inserted into the selected route; the evolution of the RSFJMI will be studied in Section 4.3. Additionally, the studies of Grimault et al. and Ho et al. informed the operator design.

In Insert, pd-pair i is selected randomly and inserted into a new route j chosen according to pd_R_rs_judge(i,j) = 1. When pd-pair i is inserted into route j2 from route j1, as in the scheme shown in the corresponding figure, stop r3 is removed from route j1 because no pd-pair requires transportation from/to r3, while the structure of route j2 remains the same; by reducing the number of stops, the scheme is improved.

In Spread, a pd-pair is selected and inserted into a new route as in an Insert operation. Should the vehicle be overloaded, the success rate can be improved by choosing a new pd-pair i from the overloaded route and transferring it into a new route j selected by pd_R_rs_judge(i,j) = 1; this cycle continues until the vehicle is no longer overloaded, or the cycle count exceeds a preset iteration-control value K, whose task is to control the computing time of this operation. As illustrated in the corresponding figure, pd-pair i1 transported through route j1 is inserted into route j2, and stop r4 is deleted from route j1 because no pd-pair requires transportation from/to r4, the structure of route j2 remaining the same. Since the vehicle is overloaded at section r3-r4 in route j2, pd-pair i2 is then selected from route j2 and inserted into j3; the overload at r3-r4 is resolved while the structure of route j3 remains unchanged, and the scheme is finally improved.
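The net income driving these operators can be sketched as follows (our simplification: the paper's formulas distinguish weighting and connecting sections, which here are both contained in the travelled path; the graph and numbers are hypothetical):

```python
import networkx as nx

def net_income(G, depot, stops, pair_revenues, vc, tc, sc):
    length, pos = 0.0, depot
    for s in stops:                     # accumulate shortest-path travel length
        length += nx.shortest_path_length(G, pos, s, weight="length")
        pos = s
    return sum(pair_revenues) - vc - tc * length - sc * len(stops)

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 1.0), (2, 3, 2.0), (3, 4, 1.0)], weight="length")
print(net_income(G, depot=1, stops=[2, 3, 4], pair_revenues=[15.0],
                 vc=1.0, tc=1.0, sc=0.5))
```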
In Point-delete, a route is chosen at random; the stop with the minimal number of pickup and delivery stops on the route is isolated, and the pd-pairs i∈P picked up or delivered there are subsequently inserted into different routes j selected by pd_R_rs_judge(i,j) = 1, thus making it possible to delete the point from the first route. As illustrated in the corresponding figure, Point-delete is operated on stop point r4 in route j1: pd-pairs i1 and i2 are inserted into routes j2 and j3 respectively, and the new scheme is obtained; r4 is deleted from route j1 because no pd-pair requires transportation from/to it, while the structures of routes j2 and j3 remain the same. The scheme is improved due to the removal of stop point r4.

In Route-delete, the net income ne_i of each route i is computed; route k is selected according to the probability P_k = ne_k/∑ne_i and deleted from the scheme, and all pd-pairs transported by route k are transferred to the non-carried state. If there is no route with non-positive net income, then Point-delete is executed instead.

Reassign-vehicle is an Assignment Problem (AP) in which rv_{i,j} = 1 means route i is transported by vehicle j, and rv_benefit_{i,j} is defined as the corresponding income. In this strategy, vehicles are reassigned to routes to achieve the best scheme by the Gurobi solver in Matlab. Since Reassign-vehicle may deliver the most significant change to the solution but cannot always improve it, and requires more computer memory and time, it is better chosen with low probability to perturb the local best solution; a Perturbation operator is therefore proposed to shake the local best solution instead of Reassign-vehicle. In Perturbation, Insert, Spread, Point-delete, Route-delete and Reassign-vehicle are chosen according to the operator-choosing probabilities p1, p2, p3, p4 and p5, respectively. Above all, Insert, Spread, Point-delete, Route-delete, and Perturbation are defined as the operators opt(k) in this paper. Operators for route construction and adjustment can be classed into three separate categories: pd-pair insertion, pd-pair deletion, and route deletion. Muelas et al. arrayed a set of local-search operators for related problems.

Firstly, it is evident that the section between any two neighboring stop points r_i and r_{i+1} in route R is the shortest path, since all pd-pairs must be transported along the shortest path. Once pd-pair k needs to be inserted into route R(r1-r2-r3…rn), we have to find the right inserting locations of its pickup point p_k and delivery point d_k in route R first. A point r_i meeting the insertion condition for p_k and a point r_j meeting the insertion condition for d_k are sought in R; finally, p_k and d_k are inserted right after these two points, respectively. We may get three types of results: both r_i and r_j can be found, only one of r_i and r_j can be found, or neither r_i nor r_j can be found.

Both r_i and r_j can be found: the resulting route form is r-p-r-d-r. If r_i comes after r_j in R, pd-pair k is opposite to route R and cannot be inserted into it, as in the corresponding figure. Because there is only one shortest path between any two nodes in this paper, if there is more than one r_i (or r_j) satisfying the criteria, it must be that p_k (or d_k) is in route R already, and then the point need not be inserted repeatedly; as in the corresponding figure, where r2 is d_k, R(…r1-r2-r3…) remains the same after inserting d_k into it.
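The position search can be sketched as follows (the exact conditions are garbled in the source; we assume a point r_i qualifies when p_k lies on the shortest path between consecutive stops r_i and r_{i+1}, and analogously for d_k):

```python
import networkx as nx

def find_position(G, stops, point):
    if point in stops:
        return stops.index(point)              # already a stop: nothing to insert
    d = lambda a, b: nx.shortest_path_length(G, a, b, weight="length")
    for i in range(len(stops) - 1):
        if d(stops[i], point) + d(point, stops[i + 1]) == d(stops[i], stops[i + 1]):
            return i                           # insert right after stops[i]
    return None                                # the "cannot be found" case

def classify_insertion(G, stops, pickup, delivery):
    ri, rj = find_position(G, stops, pickup), find_position(G, stops, delivery)
    if ri is not None and rj is not None:
        return "both found" if ri <= rj else "opposite direction"
    if (ri is None) != (rj is None):
        return "only one found"
    return "neither found: try attaching to an end of the route"

G = nx.path_graph(6)                           # 0-1-2-3-4-5
print(classify_insertion(G, [0, 2, 5], pickup=1, delivery=4))  # "both found"
```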
Only one of r_i and r_j can be found: if only r_i can be found, the route structure form of R' must be r-p-r-d, i.e., d_k is appended after the last stop; pd-pair k must be transported along the shortest path in route R', and when this fails, k cannot be inserted into R, as in the corresponding figure. If only r_j can be found, the route structure form of R' must be p-r-d-r, i.e., p_k is attached before the first stop; again pd-pair k must be transported along the shortest path in route R', and when this fails, k cannot be inserted into route R, as in the corresponding figure.

Neither r_i nor r_j can be found: if p_k is attached before the first stop and d_k after the last stop, the route structure form of R' is p-r-r-d; pd-pair k is then transported along the shortest path in route R', the route structure is feasible, and pd-pair k can be inserted into route R, as in the corresponding figure. If p_k and d_k must both be connected to the front of route R, the route structure form of R' must be p-d-r-r to minimize the length of R'. Let path(R) be the vehicle path consisting of all pass points in route R; pd-pair k must be transported along the shortest path and each point must not be visited more than once in route R' to ensure feasibility of the route structure, and in the illustrated example k cannot be inserted into route R. If p_k and d_k must both be connected to the back of route R, the route structure form of R' must be r-r-p-d to minimize the length of route R'; again, pd-pair k must be transported along the shortest path and each point must not be visited more than once in route R' to ensure feasibility of the route structure.

Point p_k or d_k is deleted if no other pd-pair needs picking up from it or delivering to it after deleting pd-pair k from route R; otherwise, it remains in route R. For the Route-delete strategy, all pd-pairs are removed and the route is deleted.

At each local search move step, the RSFJMC [pd_combine_rs_judge] remains the same, while the RSFJMI changes with the changes of route j. In order to update it quickly, the following results are used: when a move empties route k', [pd_R_rs_judge] for route k' can be updated directly from the combining matrix; letting L2 = {p_l-d_l} be the set of pd-pairs transported by route k', [pd_R_rs_judge] for route k' can be updated by the corresponding formula, where the constant d(k,l) is defined as the length between point k and point l.

It is important to obtain a good initial solution for such a large-scale and complex problem. In this paper, a generation method for the initial solution is proposed, based on the idea of maximum saving; the generation steps are presented in Algorithm 1.

Many Local Search (LS) meta-heuristics have been studied for the VRP, such as the ALNS proposed by Ropke et al. and the heuristics of Pisinger et al. Since the major neighborhood operator is pd-pair insertion, and a chosen pd-pair must be inserted at a fixed position in a route at each local search move step in the OPDPSTRP, solutions are hard to change and often cannot be improved noticeably by a single operator. That is to say, new methods different from traditional LS need to be proposed for the OPDPSTRP. A basic VND and a basic VNS are proposed for the OPDPSTRP based on the above five neighborhoods.
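A skeleton of the basic VND reads as follows (our sketch, since Algorithm 2 is not reproduced in the text: the five neighborhoods are tried in order, restarting from the first whenever an improving move is found; `evaluate` is the total net income of a scheme, and the operator bodies are placeholders):

```python
def vnd(solution, operators, evaluate):
    k = 0
    while k < len(operators):
        candidate = operators[k](solution)       # one move in neighborhood k
        if candidate is not None and evaluate(candidate) > evaluate(solution):
            solution, k = candidate, 0           # improvement: restart from N1
        else:
            k += 1                               # otherwise try the next one
    return solution

# operators = [insert_op, spread_op, point_delete_op, route_delete_op, perturb_op]
```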
A VND and a VNS are applied to solve the OPDPSTRP; their pseudo-code is presented in Algorithm 2 and Algorithm 3. In order to achieve a near-optimal solution of this problem, we further propose a new MS_VND metaheuristic and a new Multi-Start Variable Neighborhood Search (MS_VNS) metaheuristic, developed to improve the efficiency of the proposed neighborhoods and algorithms. The steps of the MS_VND are presented in Algorithm 4, and the steps of the MS_VNS in Algorithm 5. As shown in Algorithm 4 and Algorithm 5, the MS_VND and the MS_VNS improve on the basic versions in the following ways: (1) a multi-start candidate solution set of size n is acquired from the initial solution and updated to diversify the search, and part of the worse candidate solutions are replaced by the best solution, according to a replacing proportion m, if the solution is improving; (2) five new operators are utilized to improve the solution, which differs from the traditional VNS; (3) in the MS_VND, each candidate solution is transformed only once at a step (multi-start candidate solutions and one operator), which differs from traditional LS; (4) a new local solution inferior to the primary one can also be accepted on the basis of two hypotheses: the number of iterations during which the solution has stayed the same exceeds half of the preset value, and the evaluation of the new solution is not drastically worse than that of the primary one. Insert