Dataset Viewer (auto-converted to Parquet)
Column schema: context (string, 100–4.94k chars); A (string, 100–5.96k chars); B (string, 100–4.49k chars); C (string, 100–4.31k chars); D (string, 100–4.94k chars); label (string, 4 classes).
$(\gamma,\Lambda)$ values estimated from spiral solutions. The fact that the
$2\Omega(\sigma+3)/\sqrt{3}$
$\zeta<\sqrt{\frac{\sigma}{2}}\gamma-\frac{\sigma}{2}$
$2\Omega(\sigma+3)/\sqrt{3}$ plotted against $\sigma+2\zeta$, for results from
$\frac{2}{\sqrt{3}}\Omega(\sigma+3)$ against $\sigma+2\zeta$.
A
Breast cancer is the leading cancer type in women worldwide, with an estimated 2 million new cases and 627,000 deaths in 2018. Breast cancer staging refers to the process of describing the tumor's growth or spread. Accurate staging by pathologists is an essential task that determines the patient's treatment and their chances of recovery (prognosis). An important part of breast cancer staging is the assessment of the sentinel axillary node, a tissue commonly used for the detection of early signs of tumor spreading (metastasis). However, sentinel lymph node assessment by pathologists is not always easy and optimal. For instance, a retrospective survey performed in 2012 by expert pathologists requalified the status of a high proportion of sentinel nodes [2].
Fine-tuning of the hyperparameters was done for the Inception V3 and VGG19 models on the PCAM dataset. Two hyperparameters (Adam learning rate and batch size) were tuned using the Keras Tuner with the Hyperband algorithm. The performance improved for both models, with an AUC of 0.95 for the Inception V3 model and an AUC of 0.96 for the VGG19 model. These performances are comparable to current state–of–the–art models for computational pathology analysis: they would rank within the top 5 algorithms of the CAMELYON16 challenge [12] and within the top 10 models for the PCAM dataset (https://tinyurl.com/3rhk6ph6). The current best PCAM models have an AUC around 0.97 and implement rotation-equivariant strategies [26, 27, 28]. Indeed, histology images are typically symmetric under rotation, meaning that each orientation is equally likely to appear. Rotation equivariance removes the necessity to learn this type of transformation from the data, thus allowing more discriminative features to be learned while also reducing the number of parameters of the model.
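As a rough illustration of this tuning setup (a minimal sketch, not the study's code: the classification head, batch-size candidates, and epoch budget are assumptions), a Keras Tuner Hyperband search over the Adam learning rate and batch size might look like:

```python
import keras_tuner as kt
import tensorflow as tf

class PCamHyperModel(kt.HyperModel):
    def build(self, hp):
        # InceptionV3 backbone trained from scratch on PCAM's 96x96 RGB patches.
        base = tf.keras.applications.InceptionV3(
            weights=None, include_top=False, input_shape=(96, 96, 3))
        x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
        out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
        model = tf.keras.Model(base.input, out)
        # Tuned hyperparameter 1: Adam learning rate, log-scaled search space.
        lr = hp.Float("learning_rate", 1e-5, 1e-2, sampling="log")
        model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auc")])
        return model

    def fit(self, hp, model, *args, **kwargs):
        # Tuned hyperparameter 2: batch size (candidate values are hypothetical).
        return model.fit(*args,
                         batch_size=hp.Choice("batch_size", [32, 64, 128]),
                         **kwargs)

tuner = kt.Hyperband(PCamHyperModel(),
                     objective=kt.Objective("val_auc", "max"),
                     max_epochs=10, project_name="pcam_tuning")
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))
```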
Precise staging by expert pathologists of breast cancer axillary nodes, a tissue commonly used for the detection of early signs of tumor spreading, is an essential task that will determine the patient’s treatment and his chances of recovery. However, it is a difficult task that was shown to be prone to misclassification. Algorithms, and in particular deep learning based convolutional neural networks, can help the experts in this task by analyzing fully digitized slides of microscopic stained tissue sections. In this study, I evaluated twelve different CNN architectures and different hardware acceleration devices for breast cancer classification on two different public datasets consisting of hundreds of thousands of images. The performance of hardware acceleration devices can improve the training time by a factor of five to twelve, depending on the model used. On the other hand, increasing the convolutional depth increases the training time by a factor of four to six, depending on the acceleration device used. More complex models tend to perform better than very simple ones, especially when fully retrained on the digital pathology dataset, but the relationship between model complexity and performance is not straightforward. Transfer learning from imagenet always performs worse than fully retraining the models. Fine-tuning the hyperparameters of the model improves the results, with the best model tested in this study showing very high performance, comparable to current state–of–the–art models.
Recently, deep learning algorithms have made major advances in solving problems that have resisted the machine learning and artificial intelligence community such as speech recognition, the activity of potential drug molecules, brain circuits reconstruction and the prediction of the effects of non-coding RNA mutation on gene expression and disease [3]. Convolutional neural networks (CNNs) are a class of deep neural networks characterized by a shared-weight architecture of convolution kernels (or filters) that slide along input features and provide translation equivariant features known as feature maps. One of the main advantages of CNNs is that the network learns to optimize the filters through automated learning, requiring very little pre-processing compared to other machine learning techniques. Since their introduction in the 1990’s [4], CNNs have shown excellent performances in the most challenging visual classification tasks and are currently dominating this research field [5]. When applied to medical imaging, CNNs demonstrated excellent performance and have been successfully used for the identification of retinal diseases from fundus images [6, 7, 8], tuberculosis from chest radiography images [9, 10] and malignant melanoma from skin images [11]. CNNs have also been used for the detection of lymph node metastases in women with breast cancer in an algorithm competition known as CAMELYON16 (Cancer Metastases in Lymph Nodes Challenge), with the best models showing equal or slightly better performances than a panel of pathologists [12]. In this study, I use a dataset of more than 300,000 lymph node images derived from CAMELYON, known as the PCAM (Patch CAMELYON) dataset [13] and the IDC dataset, composed of more than 220,000 images derived from whole slide images of invasive ductal carcinoma tissue [14, 15], one of the most common forms of breast cancer. I used these datasets to characterize and analyze the performance of different CNNs network architectures and GPU accelerators, using a standard, off–the–shelf, deep learning computational library.
All the other models are CNN models that are part of the TensorFlow Keras library (Table 1). They were developed and tested by several research groups on the ImageNet Challenge, a competition with hundreds of object categories and millions of images [25]. For instance, InceptionV3 is a model created in 2015 with a very deep architecture (94 convolutional layers) that performs very well on various computer vision tasks [19]. As for most of the models available in the Keras library, it is possible to load the model pre-weighted with ImageNet training weights, thus enabling transfer learning (TL). TL is a popular approach in deep learning where pre-trained models are used as the starting point for computer vision and natural language processing tasks in order to save computing and time resources. In this study I have used models both pre-trained with ImageNet weights and fully re-trained on the two datasets. For the pre-trained version, only the last layers of the model are re-trained on the dataset (global average pooling layer, dense layer and final output). Of course, given that the number of training parameters is much greater in the case of the fully re-trained model, the computation time needed for training is also expected to be much longer. Table 1 details the architecture and parameters of each model used in this study. Note that some CNN models could not be used with the IDC dataset because its images are smaller than the minimum input size required by these models.
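A minimal sketch of the two training regimes described above (the 96×96 input shape matches PCAM patches; the dense-layer width is a hypothetical choice, not taken from Table 1):

```python
from tensorflow import keras

# Transfer-learning variant: load VGG19 with ImageNet weights, drop the top.
base = keras.applications.VGG19(weights="imagenet", include_top=False,
                                input_shape=(96, 96, 3))
base.trainable = False  # freeze the convolutional base; only the head trains

x = keras.layers.GlobalAveragePooling2D()(base.output)
x = keras.layers.Dense(128, activation="relu")(x)  # hypothetical head width
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # final output
model = keras.Model(base.input, outputs)

# Fully re-trained variant: set weights=None (or base.trainable = True) so all
# convolutional parameters are learned from the pathology images, at the cost
# of a much longer training time.
```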
C
$\mathcal{G}_{\text{ESC}}=\left(\mathcal{V}_{\text{ESC}},\mathcal{E}_{\text{ESC}}\right)$ we take all sets of resident traits that correspond to an ESC, i.e. that have stability degree strictly bigger than $\alpha$, and edges represent possible transitions to other ESCs. More precisely,
$\mathcal{E}_{\text{ESC}}:=$
As vertices for the general metastability graph $\mathcal{G}_{\text{ESC}}=\left(\mathcal{V}_{\text{ESC}},\mathcal{E}_{\text{ESC}}\right)$
$\mathbf{v}_{\text{ESC}}(\{0\},2a)=\{2a\},\quad\mathbf{v}_{\text{ESC}}(\{0\},2b)=\{4\},\quad\mathbf{v}_{\text{ESC}}(\{0\},2c)=\{4\}.$
$\mathcal{V}_{\text{ESC}}:=$
D
This paper delves into the intricate dynamics of a COVID-19 epidemic model augmented with non-Gaussian noise. The exploration extends beyond the deterministic facet of the model, leading to a rigorous proof of the existence and uniqueness of a non-negative global solution for the stochastic system (4). These novel findings not only contribute to the theoretical foundation but also augment and refine insights garnered from preceding studies, as perceptibly illustrated in the graphical representations shown throughout this manuscript.
From equation (17), it is evident that $\psi<\psi_{0}$, highlighting that the stochastic approach is inherently more realistic than its deterministic counterpart. This observation underscores the significance of considering stochastic elements in modeling the dynamics of the system, as it captures the inherent uncertainties and fluctuations that influence the course of the COVID-19 epidemic.
The deterministic analysis provides a foundational understanding, while the stochastic counterpart offers a nuanced perspective, acknowledging the inherent uncertainties and fluctuations that characterize real-world epidemiological scenarios. The demonstration of the existence and uniqueness of solutions in the stochastic framework underscores the robustness of the model in capturing the complexities of COVID-19 dynamics.
Numerical solutions of systems are invaluable in the study of epidemic models. This section presents the numerical results of our model, shedding light on how the parameters of the deterministic model (2) and the intensity of non-Gaussian noise in the stochastic model (4) impact the dynamics. We conduct numerical experiments to illustrate the extinction and persistence of the novel coronavirus, COVID-19, in both the deterministic model and its corresponding stochastic system for comparison.
Next, we aim to demonstrate that the stochastic COVID-19 model (4) possesses a unique, positive, and globally defined solution for initial conditions $(S(0),I(0))\in\mathbb{R}_{+}^{2}$. This analysis underscores the existence and stability of the solution across the entire domain, providing a foundation for understanding the dynamics of the system.
B
$\mathbb{E}\mathcal{X}=\arg\min_{X}\sum_{k=1}^{n}\mathcal{D}(X,\mathcal{X}_{k}).$
Unlike the sample mean, we can have many different networks with identical topology that give the minimum. Similarly, we can define the topological variance $\mathbb{V}\mathcal{X}$ as follows.
The topological variance can be interpreted as the variability of graphs from the topological mean $\mathbb{E}\mathcal{X}$. To compute the topological mean and variance, we only need to identify a network with identical topology as the topological mean or the topological variance.
The topological variance $\mathbb{V}\mathcal{X}$ of networks $\mathcal{X}_{1},\cdots,\mathcal{X}_{n}$ is
The sum (9) does not uniquely define networks. Like the toy example in Figure 5, we can have many topologically equivalent brain networks that give the identical distance. Thus, the average of two graphs is also not uniquely defined. The situation is analogous to the Fréchet mean, which frequently does not result in a unique mean (Le and Kume, 2000; Turner et al., 2014; Zemel and Panaretos, 2019; Dubey and Müller, 2019). We introduce the concept of the topological mean for networks, defined as the minimizer according to the Wasserstein distance, mirroring how the sample mean minimizes the Euclidean distance. The squared Wasserstein distance is translation invariant such that
A
We study here the effects of the various therapeutic strategies described in the experiments (C), (D) and (E) on the different system components. We denote with the index $S_1$ the solution components of experiment (C), with $S_2$ the solution components of experiment (D), and with $S_3$ the ones of experiment (E). Starting from experiment (C), we first assess the impact of standard radio- and chemotherapy on the densities of glioma, ECs, necrotic matter, healthy tissue, and on the concentration of VEGFs. This numerical experiment is motivated by the fact that in the real case reported in Section 4.3 the patient was treated with this specific combined treatment. Figure 2 shows a scheme of this treatment schedule.
With the same choice of parameters as that employed to generate Figure 5 for experiment (A), we test the effect of the treatment plan depicted in Figure 2. The results of this experiment (C) are shown in Figure 7. The first row of Figure 7 (that also corresponds to the third row of Figure 5) represents the state of the five species at the beginning of the treatment, while the second and third rows correspond to the situation after 3 and 6 weeks, respectively. Precisely, the third row represents the situation at the end of the combined treatment. In the last row, we show the system evolution after the resting period of 10 weeks, during which no therapy is applied.
Finally, in experiment (E), we analyze the effects of the treatment schedule sketched in Figure 4. Precisely, after letting the tumor grow for 24 weeks, we apply a combined treatment of radio- and chemotherapy. The former is applied 5 days per week (from Monday to Friday) for 6 weeks, at a fractionated dose of 2 Gy per day (total dose of 60 Gy), while the latter is based on temozolomide administered orally every day at a standard constant dose of 75 mg/m². Then, after a resting period of 4 weeks, adjuvant anti-angiogenic therapy is applied at a standard dose of 10 mg/kg intravenously every 2 weeks for another 6 weeks, thus providing a total of 3 doses during the whole treatment period. With the same choice of parameters as in Figure 7 for experiment (C), we test the effects of the described combined therapy against the alternative therapy plan proposed in experiment (D). We show the differences in the evolution of the solution components for the two schedules in Figure 9. Precisely, we consider the differences between the populations indicated with index $S_2$ (referring to the schedule in Figure 3) and the ones indicated with index $S_3$ (referring to the schedule in Figure 4). Results are shown at 27 weeks (after three weeks of radio- and chemotherapy), at 30 weeks (end of this treatment), at 40 weeks (after the resting period for the case $S_2$ or after the adjuvant anti-angiogenic treatment for the case $S_3$), and at 50 weeks, allowing for 10 more weeks without therapy for the follow-up.
Comparing Figures 5 and 7, we immediately grasp the effects of the radio- and chemotherapy on the tumor population, whose density strongly decreases during treatment, while the density of the necrotic matter increases, as this component collects the effects of the therapy on tumor, ECs, and healthy tissue. The reduction in glioma density consequently affects VEGF production, with a decrease in its expression. The impact of the treatment is also evident in the evolution of the healthy tissue. In turn, the reduction of normal tissue affects the proliferation of cancer and endothelial cells. In fact, this depends on the availability of healthy tissue and, when the latter is excessively degraded, proliferation is impaired. This effect can be observed by comparing the last row of Figure 5 with the third row of Figure 7, which relates to the system evolution at 30 weeks (at the end of the treatment).
At the end of the treatment, we let the patient rest for 10 weeks and analyze how the tumor eventually reorganizes and evolves. Figure 3 shows the specific scheme of this treatment schedule. We also prolonged the simulations by 10 more therapy-free weeks, in order to better observe a possible tumor relapse. We consider the same choice of parameters as in Figure 7 for experiment (C) to test the effect of the described combined therapy plan, and we show the differences in the evolution of the solution components in Figure 8. Precisely, we consider the differences between the species indicated with index $S_1$ (referring to the schedule in Figure 2) and those indicated with the index $S_2$ (referring to the schedule in Figure 3). Results are shown at 27 weeks (after three weeks of combined treatment), at 30 weeks (end of the treatment), at 40 weeks (after 10 weeks of no therapy), and finally at 50 weeks (after 10 more weeks without treatment).
A
In Fig. 7, an acoustic wave pulse is depicted as it propagates through a brain. Due to the distinct acoustic properties between cerebrospinal fluid (CSF) and brain tissue, the pulse experiences partial reflections at each boundary interface. On one hand, these reflections have the potential to amplify the damaging effects of the incoming wave, particularly in the vicinity of the boundary. On the other hand, these reflections also attenuate the wave’s energy, rendering the transmitted wave less harmful. The overall impact of these reflections should be beneficial.
A head impact generates a pressure wave pulse in the CSF, which, being a liquid, allows only the passage of pressure or P-waves.
Conversely, when a sulcus is oriented perpendicularly to the direction of acceleration, the density difference between the brain tissue and CSF can act as a protective mechanism, as illustrated in Fig. 10. A head impact generates a force that propagates through the brain, causing adjacent brain regions to compress. This compression forces the less dense CSF out of the sulci, thereby reducing the compressive force and allowing the adjacent gyrus more time to respond, effectively mitigating the acceleration. However, if the acceleration is excessively strong, adjacent gyri may collide, resulting in cortical surface damage. This mechanism is analogous to how CSF cushions the impact between the brain and the skull [52].
In Fig. 7, an acoustic wave pulse is depicted as it propagates through a brain. Due to the distinct acoustic properties between cerebrospinal fluid (CSF) and brain tissue, the pulse experiences partial reflections at each boundary interface. On one hand, these reflections have the potential to amplify the damaging effects of the incoming wave, particularly in the vicinity of the boundary. On the other hand, these reflections also attenuate the wave’s energy, rendering the transmitted wave less harmful. The overall impact of these reflections should be beneficial.
Sulci protecting the brain against acoustic waves: A head impact generates an acoustic wave pulse that propagates through the brain. The pulse encounters interfaces between brain tissue and CSF, leading to both reflection and refraction, thereby diminishing its energy and reducing its potential for harm.
D
To further validate our model, we examined the top 20 novel synergistic predictions of drug-drug-cell line combinations (from the test set), ranked by the highest synergy probability from our DDoS model. For consistency with previous studies, we considered the top “false positives” of the Loewe dataset and investigated how strongly these predictions are supported by the remaining synergy scores, i.e., Bliss, ZIP and HSA. In Table 6 we list those triplets along with an indication of how many of the three scores are positive for each triplet. We discovered that 75% of the top triplets have at least one other synergy score (Bliss, ZIP, HSA) supporting those predictions (i.e. confirming synergism), while in nearly 50% of the cases all three scores give this indication, highlighting the efficacy of our model in representing all synergy scores (i.e. being less biased toward one vs. another).
We obtained the drug combination dataset from the DrugComb (Zheng et al., 2021) database (https://drugcomb.org/, version 1.5). This dataset comprises an initial set of 1,432,351 (Drug A, Drug B, Cell Line) combination triplets. These samples are drawn from 34 distinct studies, including notable sources such as NCI-ALMANAC (Holbeck et al., 2017b), O’NEIL (Merck) (O’Neil et al., 2016b) and CLOUD (Licciardello et al., 2017). For each of the samples, we consider the chemical structures of the drugs, the gene expression profiles of untreated cell lines, and four different synergy scores: Loewe, Bliss, HSA, ZIP (see Appendix section B).
To further validate our model, we examined the top 20 novel synergistic predictions of drug-drug-cell line combinations (from the test set), ranked by the highest synergy probability from our DDoS model. For consistency with previous studies, we considered the top “false positives” of the Loewe dataset and investigated how strongly these predictions are supported by the remaining synergy scores, i.e., Bliss, ZIP and HSA. In Table 6 we list those triplets along with an indication of how many of the three scores are positive for each triplet. We discovered that 75% of the top triplets have at least one other synergy score (Bliss, ZIP, HSA) supporting those predictions (i.e. confirming synergism), while in nearly 50% of the cases all three scores give this indication, highlighting the efficacy of our model in representing all synergy scores (i.e. being less biased toward one vs. another).
For further model evaluation, we generated four additional benchmark datasets, one for each of the four synergy scores (Loewe, Bliss, HSA, ZIP). These datasets differ from the benchmark dataset reported above by including the additive triplets in their respective non-synergistic class (negative class), i.e., triplets which have synergy values between the two specified thresholds, −10 and 10 (see Benchmark datasets section for more details). Those datasets are vastly more imbalanced, since the new non-synergistic class includes a large number of additional samples. The resulting Loewe, Bliss, HSA and ZIP datasets include 5.3%, 13.8%, 6.5% and 11.9% positive (synergistic) labels, respectively. This imbalance is challenging for model performance when using the AUPR metric. Our model still outperforms the DeepSynergy baseline model on those four additional datasets, as reported in Table 4.
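A minimal sketch of the labeling rule described above (field names are hypothetical; only the −10/10 thresholds come from the text):

```python
def build_benchmark(rows, score_key, lo=-10.0, hi=10.0):
    """One extended benchmark per synergy score (Loewe, Bliss, HSA, ZIP):
    label 1 if the triplet's score exceeds hi; additive triplets, with
    scores in (lo, hi), are kept and folded into the negative class."""
    return [
        (row["drug_a"], row["drug_b"], row["cell_line"],  # hypothetical keys
         1 if row[score_key] > hi else 0)
        for row in rows
    ]
```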
Table 6: Case studies - Top 20 “false positives” predictions of the Loewe dataset, ranked by the model probability of a synergistic outcome. The number of * indicates how many of the HSA, ZIP, and Bliss scores support each prediction.
D
Fig. S9 also functions as a sensitivity analysis of our results with respect to the technical characterization of $U$ (resp. $\gamma$): while decreasing $U$ (resp. $\gamma$) decreases the mixing, so that microphytoplankton could in fact be slightly more aggregated, the dominance index never gets above 0.7 at the interaction radius threshold, so the results are not modified substantially. However, the combination of such a lower $U$ and a slightly lower interaction threshold (see Discussion) may create some intraspecific dominance in microphytoplankton too.
$K(r)$. Using its marked version, $C_{j}K_{ij}(r)$ is the average
$=4\pi\left(2Dr^{2}\frac{\partial G}{\partial r}+\gamma r^{4}\frac{\partial G}{\partial r}\right)+2\lambda C$
One of the reasons why estimating $K(r)$, and even more so $g(r)$,
as $g(r)=\frac{K'(r)}{4\pi r^{2}}$.
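Since $g(r)$ is obtained from $K(r)$ by differentiation, a small numerical sketch (assuming a discretized estimate of a 3D Ripley's $K$ on a grid of radii; not the paper's estimator):

```python
import numpy as np

def pair_correlation_from_K(r, K):
    """Numerically differentiate a (3D) Ripley's K estimate to obtain
    g(r) = K'(r) / (4*pi*r^2), using central differences on the grid r."""
    dK = np.gradient(K, r)          # K'(r) via finite differences
    return dK / (4.0 * np.pi * r**2)
```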
C
4. Our model identifies luminance and color selective nodes and can be readily applied to video inputs.
These results are consistent with experimental findings showing that neurons in the early visual system exhibit
In this paper, we concentrate on representation learning, leaving the topic of inference for future studies.
This is consistent with the finding that neurons in the early visual system multiplex information about multiple stimulus properties
In summary, we abstract the complex biological early visual system using four assumptions that serve as the foundation for all studies in this paper.
D
We denoted the distance in the phenotypic space between the local minimum and a distant point exhibiting the same fitness as $d$.
We measured the behavior at steady state by setting the fitness of cells that overflowed from a region between the local and global minimum to zero.
For the fixation of different phenotypes from the local maximum, a daughter cell must display higher fitness than that observed at the local minimum in the limit of high selection pressure.
Then, the daughter cell's phenotype must differ by more than $d$ from the mother cell's phenotype.
We considered the probability that a phenotype of a daughter cell crosses the valley between two phenotypes at the local and global maxima on the fitness landscape.
B
Fig. 2: Hodge decomposition on a graph with 5 nodes and 6 edges. The edge flow is decomposed into gradient, curl and harmonic components.
We decomposed individual brain networks using the Hodge decomposition. In Figure 4, the Hodge decomposition applied to the average female and male brain networks is displayed. We then assessed whether there are topological differences between females and males in the original connectivity (edge flow). Following the test procedure in Section 2.3, the Wasserstein distances $\mathfrak{L}_{\infty}^{b}$ on birth values for testing 0D topology differences and $\mathfrak{L}_{\infty}^{d}$ on death values for testing 1D topology differences are used separately. The permutation test conducted on both the birth set (first term) and the death set (second term) yielded
partition graphs into topologically distinct subgraphs [14, 15]. We first apply graph filtration, a technique involving the sequential removal of edges from a graph $G$, starting with the smallest edge weight and progressing to the largest [6, 8]. We identify the birth set $B(G)$, associated with the emergence of connected components, by computing the maximum spanning tree (MST) of $G$ using Kruskal's or Prim's algorithm [6]. The death set $D(G)$ then consists of the edges not present in $B(G)$ (Figure 1); these edges carry the death values of cycles (loops) during the filtration. We perform BDD independently on both non-loop and loop flows, allowing us to characterize the topology of each component of the Hodge decomposition. To measure the topological disparities between components, we use the Wasserstein distance applied to their respective BDDs. The Wasserstein distance provides optimal matchings that are stable to infinitesimal noise and thus robust [16, 15].
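A minimal sketch of the birth-death decomposition and the $\mathfrak{L}_{\infty}$ distance described above (using networkx's maximum spanning tree; equal-size sets are assumed so that matching sorted values is optimal):

```python
import numpy as np
import networkx as nx

def birth_death_decomposition(G, weight="weight"):
    """Birth-death decomposition of a weighted graph: the birth set holds
    the edge weights of the maximum spanning tree (component births during
    graph filtration); the death set holds the remaining edge weights
    (cycle deaths)."""
    mst = nx.maximum_spanning_tree(G, weight=weight)
    mst_edges = {frozenset(e) for e in mst.edges()}
    births = sorted(d[weight] for _, _, d in mst.edges(data=True))
    deaths = sorted(d[weight] for u, v, d in G.edges(data=True)
                    if frozenset((u, v)) not in mst_edges)
    return np.array(births), np.array(deaths)

def linf_wasserstein(a, b):
    """L-infinity Wasserstein distance between two equal-size 1D sets;
    for sorted inputs the identity matching is optimal."""
    return float(np.max(np.abs(np.sort(a) - np.sort(b))))
```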
Fig. 1: Illustration of the Hodge decomposition, which decomposes the edge flow into non-loop and loop flows. These networks are then separately subjected to birth-death decomposition to obtain the topological features.
To measure topological distance between graphs, we employ the birth-death decomposition (BDD), which
D
$\mathbb{E}\left[\exp\left\{-\frac{\eta\gamma}{u_{N}}\frac{1+\epsilon}{\epsilon}i^{-\gamma/\alpha}\left(\log(Y_{i})-\mu_{i-1}+\frac{1}{\alpha}\log(i)\right)\right\}\right]^{\frac{\epsilon}{1+\epsilon}}\leq\exp\left\{C'\frac{\eta\gamma}{u_{N}}i^{-\gamma/\alpha}\right\}.$
in the product topology for $(\mathscr{P}_{n})^{\mathbb{N}}$ to a Markov chain with initial state $\mathbf{0}_{n}$ and transition matrix given
$\pi_{i}\in\pi$ for all $i\in\mathbb{N}$
i) If $c_{N}\to c>0$ as $N\to\infty$, then $\left(\varPi^{(N,n)}_{t}\right)_{t\in\mathbb{N}}$ converges weakly in the product topology for $(\mathscr{P}_{n})^{\mathbb{N}}$
Taking the product over $i$, and plugging in (5.2), we conclude, for all $N\in\mathbb{N}$,
D
This research is supported in part by Science, Technology and Innovation Commission of Shenzhen Municipality (No. WDZC20200818121348001).
For the prostate cancer diagnosis task, we curated datasets from three hospitals (Hebei-1, Hebei-2, and Nanchang) and two public sources (DiagSet-B and PANDA) for training. To evaluate the effectiveness of our approach, we tested on a private hospital dataset (QHD) and a public dataset (DiagSet-A). Hebei-1 and Hebei-2 represent hospital datasets from Hebei Province, China, while Nanchang denotes a hospital dataset from Nanchang, China. The QHD dataset, sourced from a hospital in Qinhuangdao, China, comprised 765 pathological images, 433 of which were positive. DiagSet-A, a subset of the DiagSet data, comprised 430 pathological images, of which 227 were positive.
The Research Ethics Committee of The Fourth Hospital of Hebei Medical University, China, approved this study.
To assess the accuracy of prostate cancer Gleason grading, we employed two private datasets (Hebei-1 and Hebei-2) and one public dataset (PANDA) for training purposes. In the evaluation phase, a private hospital dataset (Nanchang) was utilized. This allowed us to evaluate the performance and reliability of the proposed approach.
The experimental results for the diagnosis task on the validation set are presented in Table 5, demonstrating metrics such as AUC, F1, ACC, and Recall. As $\alpha$ increases, the overall performance of the local center model improves due to the different proportions of categories in the diagnostic task. When $\alpha$ is 0.05, the average ACC of the models from the seven centers (Hebei-1, Hebei-2, Nanchang, DiagSet-B-1, DiagSet-B-2, PANDA-1, PANDA-2) is only 0.8169, with an average AUC of 0.9393. However, when $\alpha$ is 0.5, the average ACC of these models reaches 0.8592, accompanied by an average AUC of 0.9499.
B
Biological processes are determined through heterogeneous responses of single cells to external stimuli, e.g., developmental factors or drugs. Understanding and predicting the dynamics of single cells subject to a stimulus is thus crucial to enhancing our understanding of health and disease, and is the focus of this task.
Recent developments in molecular biology, however, aim at overcoming this technological limitation. For example, Chen et al. (2022b) propose a transcriptome profiling approach that preserves cell viability. Weinreb et al. (2020) capture cell differentiation processes by clonally connecting cells and their progenitors through barcodes (see illustrative Figure in Supplement).
Most single-cell high-throughput technologies are destructive assays, i.e., they destroy cells upon measurement, allowing us to measure only unaligned snapshots of the evolving cell population. Recent methods address this limitation by proposing (lower-throughput) technologies that keep cells alive after transcriptome profiling (Chen et al., 2022b) or that genetically tag cells to obtain a clonal trace upon cell division (Weinreb et al., 2020).
To showcase SBalign’s ability to make use of such (partial) alignments when inferring cell differentiation processes, we take advantage of the genetic barcoding system developed by Weinreb et al. (2020). With a focus on fate determination in hematopoiesis, Weinreb et al. (2020) use expressed DNA barcodes to clonally trace single-cell transcriptomes over time. The dataset consists of two snapshots: the first, recorded on day 2, when most cells are still undifferentiated (see Fig. 4a), and a second, on day 4, comprising many different mature cell types (see Fig. 4b). Using SBalign as well as the baseline fsSB, we attempt to reconstruct cell evolution between day 2 and day 4, all while capturing the heterogeneity of emerging cell types. For details on the dataset, see § B.
Beyond this, the recent use of SBs has been motivated by an important task in molecular biology: cells change their molecular profile throughout developmental processes (Schiebinger et al., 2019; Bunne et al., 2022b) or in response to perturbations such as cancer drugs (Lotfollahi et al., 2019; Bunne et al., 2021). As most measurement technologies are destructive assays, i.e., the same cell cannot be observed twice nor fully profiled over time, these methods aim at reconstructing cell dynamics from unpaired snapshots.
B
(but with finite maximal size $M$) was studied in [5, 1, 2, 15], and a very general model incorporating distributed recruitment in [4]. In [18] the well-posedness of the above problem was proven by rewriting the system in terms of characteristic coordinates.
We note that the focus in [1, 2] was on the numerical approximation of solutions of the hierarchical model. On the other hand, in [15] the authors derived a formal linearisation of the model and studied regularity properties of the governing linear semigroup. A characteristic equation was also deduced for the special case when neither the growth rate $g$ nor the mortality rate $\mu$ depends on the interaction variable $E$ ($\beta$, on the other hand, does). Note however that the linearisation and stability results established in [15] were completely formal, as the Principle of Linearised Stability has not been established for the PDE formulation (1). This is the main reason why in the current work we employ a different formulation of the model.
The organisation of the paper is as follows. In Section 2 we first present the classic PDE formulation of the model. Then we present biological assumptions underlying the model and deduce a scalar nonlinear renewal equation for the population birth rate (the so called delay formulation). In Section 3 a dynamical systems framework for the renewal equation is outlined. In Section 4 we give conditions guaranteeing the existence of a non-zero stationary birth rate. In Section 5 we apply the principle of linearised stability for delay equations [11] to prove that, for a certain two-parameter family of fertility functions, such a stationary birth rate (whenever it exists) is locally asymptotically stable. We also show that, under natural hypotheses on the ingredients, the zero stationary birth rate is a global attractor when it is the only stationary birth rate.
Indeed, we assume that the growth rate $g$ of an individual of height $x$ does not depend on $x$ directly, but only indirectly, as it depends on the amount of light the individual receives per unit of time. We assume that the latter, in turn, is fully determined by the number $E(x,t)$ of individuals that are taller than $x$ (we call $E$ an interaction variable, since it mediates how the environmental condition, here light intensity, is influenced by the extant population). We assume that the per capita death rate $\mu$ and the per capita reproduction rate $\beta$ only depend on the height $x$. In fact we assume that $\mu$ is constant, i.e., independent of $x$, while $\beta$ is a non-decreasing function of $x$. We assume that all individuals are born with the minimal height $x_{m}$ and that $g$ is positive (we do not impose an upper bound on height). The assumption that $\mu$ and $\beta$ do not depend on the environment $E$ allows us to derive fairly explicit stability criteria, as we will show later on.
In Appendix B the more classical formulation of the model, taking the form of a first order PDE involving non-local functionals, is studied. In particular we show that the conditions guaranteeing the existence of stationary population densities (with respect to height) coincide with the conditions guaranteeing non-trivial stationary birth rates in the delay formulation. This makes sense since both formulations model the same phenomena (although they are independently derived from biological assumptions). Such a phenomenological relation between the two formulations suggests that the stability results for the delay formulation can be translated to the PDE formulation (as indeed is done in [3]). Although this issue is not addressed rigorously in the present paper, some comments are included in the concluding remarks section.
A
We have considered the problem of structure learning of GGMs for paired data by focusing on the family of RCON models defined by coloured graphs named pdCGs. The main results of this paper provide insight into the structure of the model inclusion lattice of pdCGs. We have introduced an alternative representation of these graphs that facilitates the computation of neighbouring models. Furthermore, this alternative representation is naturally associated with a novel order relationship that has led to the construction of the twin lattice, whose structure resembles that of the well-known set inclusion lattice and facilitates the exploration of the search space. These results can be applied in the implementation of both greedy and Bayesian model search procedures. Here, we have shown how they can be used to improve the efficiency of stepwise backward elimination procedures. This has also made it clear that the use of the twin lattice facilitates the correct application of the principle of coherence. Finally, we have applied our procedure to learn a brain network on 36 variables. This model dimension could be regarded as somewhat small compared with the number of variables that can be dealt with by penalized likelihood methods. This is due to the fact that, as shown in Section 6, the number of pdRCON models is much larger than that of GGMs, as is the number of neighbouring submodels that need to be identified at every step of the algorithm. Furthermore, for every model considered, the maximum likelihood estimate is not available in closed form but requires an iterative procedure. Efficiency improvement is the object of current research and could be achieved, for instance, by implementing both a procedure that deals with candidate submodels in parallel and a procedure for the computation of maximum likelihood estimates explicitly designed for pdRCON models.
Coloured GGMs (Højsgaard and Lauritzen, 2008) are undirected graphical models with additional symmetry restrictions in the form of equality constraints on the parameters, which are then depicted on the dependence graph of the model by colouring of edges and vertices. Equality constraints allow one to disclose symmetries concerning both the structure of the network and the values of parameters associated with vertices and edges and, in addition, have the practical advantage of reducing the number of parameters. Roverato and Nguyen (2022) introduced a subfamily of coloured GGMs specifically designed to suit the paired data problem that they called RCON models for paired data (pdRCON). They approached the problem by considering a single coloured GGM comprising the variables of both the first and the second group. In this way, the resulting model has a graph for each of the two groups and the cross-graph dependence is explicitly represented by the edges across groups; see also Ranciati et al. (2021) and Ranciati and Roverato (2023).
We have considered the problem of structure learning of GGMs for paired data by focusing on the family of RCON models defined by coloured graphs named pdCGs. The main results of this paper provide insight into the structure of the model inclusion lattice of pdCGs. We have introduced an alternative representation of these graphs that facilitates the computation of neighbouring models. Furthermore, this alternative representation is naturally associated with a novel order relationship that has led to the construction of the twin lattice, whose structure resembles that of the well-known set inclusion lattice and facilitates the exploration of the search space. These results can be applied in the implementation of both greedy and Bayesian model search procedures. Here, we have shown how they can be used to improve the efficiency of stepwise backward elimination procedures. This has also made it clear that the use of the twin lattice facilitates the correct application of the principle of coherence. Finally, we have applied our procedure to learn a brain network on 36 variables. This model dimension could be regarded as somewhat small compared with the number of variables that can be dealt with by penalized likelihood methods. This is due to the fact that, as shown in Section 6, the number of pdRCON models is much larger than that of GGMs, as is the number of neighbouring submodels that need to be identified at every step of the algorithm. Furthermore, for every model considered, the maximum likelihood estimate is not available in closed form but requires an iterative procedure. Efficiency improvement is the object of current research and could be achieved, for instance, by implementing both a procedure that deals with candidate submodels in parallel and a procedure for the computation of maximum likelihood estimates explicitly designed for pdRCON models.
We recall, however, that, as explained in Sections 4 and 8.3, although penalized likelihood methods are considerably more efficient, their use is problematic when variables are not measured on comparable scales. Finally, we also remark that the range of application of our results is not restricted to pdRCON models. In fact, the colouring of vertices and edges of pdCGs can be associated with different types of equality restrictions, and thus with other types of graphical models for paired data for which penalized likelihood methods are not available. For instance, they could be used to identify a subfamily of RCOR models, which impose equality restrictions between specific partial variances and correlations (Højsgaard and Lauritzen, 2008).
One way to avoid the explicit exploration of the model space is by using penalized likelihood methods, which can be applied to problems of larger dimensions. Ranciati and Roverato (2023), elaborating on previous work by Ranciati et al. (2021), introduced a graphical lasso method for learning pdRCON models; see also Li et al. (2021) and Wit and Abbruzzo (2015) for applications of penalized likelihood methods to coloured graphical modelling. The applications considered in Ranciati et al. (2021) and Ranciati and Roverato (2023) concern the identification of brain networks from fMRI data and of gene networks from breast cancer gene expression data, respectively, which are contexts where variables are measured on the same scale. Indeed, the scale on which the variables are measured plays a relevant role in the procedures for learning pdRCON models. Højsgaard and Lauritzen (2008, Section 8) remarked that the comparison of concentration values is meaningful only when variables are measured on comparable scales, and they recommended that RCON models should be used only in this case. We note, however, that a less stringent condition is required in pdRCON models because the restriction to twin-pairing colour classes implies that only homologous variables need to have comparable scales, a condition easily satisfied in practice. Hence, pdRCON models can be meaningfully applied also when the scales of non-homologous variables are not comparable. However, in this case the application of graphical lasso methods is problematic. Indeed, on the one hand the result of the graphical lasso is not invariant to scalar multiplications of the variables and, for this reason, it is common practice to apply it to standardized data; see, among others, Hastie et al. (2015, p. 8) and Carter et al. (2024). On the other hand, as noticed by Højsgaard and Lauritzen (2008, Section 3.4), RCON models are not invariant under rescaling, in the sense that standardization will not preserve the original structure of colour classes. In Section 8, we present two applications: the first to the same fMRI data previously analysed by Ranciati et al. (2021) and Roverato and Nguyen (2022), and the second to a dataset on air quality data where non-homologous variables have different scales, and a comparison with the graphical lasso procedure of Ranciati and Roverato (2023) is carried out.
C
Table 1: Baseline characteristics of included patients. $p$-values were obtained using the $t$-test [20].
$\mathcal{Y}_{m}=\mathcal{X}_{m}\times_{1}U^{(1)^{T}}\times_{2}U^{(2)^{T}}\times_{3}U^{(3)^{T}},\quad m=1,2,\ldots,M,$
Our model is interpretable. The highly-weighted features were detected in the left ventricle and interventricular septum in cardiac MRI. For cardiac measurements, left atrial volume (0.778/1) contributed more than left ventricular mass (0.222/1) to the prediction.
Left Atrial Volume ($ml^{2}$)
$\{\mathcal{Y}_{1},\mathcal{Y}_{2},\ldots,\mathcal{Y}_{M}\in\mathbb{R}^{P_{1}\times P_{2}\times P_{3}}\}$ are extracted by learning three ($N=3$) projection matrices $\{U^{(n)}\in\mathbb{R}^{I_{n}\times P_{n}},n=1,2,3\}$ as follows:
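A rough NumPy sketch of this multilinear projection (the helper names are illustrative; tensor modes are 0-indexed in code while the formula uses 1-based modes):

```python
import numpy as np

def mode_n_product(T, M, n):
    """Mode-n product T x_n M: contracts mode n of tensor T with the
    columns of matrix M, where M has shape (J, I_n)."""
    Tn = np.moveaxis(T, n, 0)                # bring mode n to the front
    out = np.tensordot(M, Tn, axes=(1, 0))   # (J, I_n) x (I_n, ...) -> (J, ...)
    return np.moveaxis(out, 0, n)            # restore the axis order

def project(X, U1, U2, U3):
    """Y_m = X_m x_1 U1^T x_2 U2^T x_3 U3^T, mapping an (I1, I2, I3)
    tensor to a reduced (P1, P2, P3) core, with each U_n of shape (I_n, P_n)."""
    Y = mode_n_product(X, U1.T, 0)
    Y = mode_n_product(Y, U2.T, 1)
    return mode_n_product(Y, U3.T, 2)
```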
C
$\underline{0.675}\pm0.175$
$\underline{0.605}\pm0.068$
$\underline{0.605}\pm0.075$
$\underline{0.605}\pm0.068$
$\underline{0.675}\pm0.175$
A
Importantly, we lay the foundation for future work to explore the properties of more general multi-compartment systems subject to external noise. For instance, study of the interaction between the external timescales (i.e., autocorrelation of the external input) and the internal timescales (i.e., progression through the system) is highly relevant to specific biological problems: in the virus-cell lysis problem, this interaction is relevant for immune system detection and, therefore, infection clearance. Aside from external noise, other stochastic features, including both between-cell and between-virion heterogeneity and fluctuations in the replication process itself are known to play an important role in within-host virus replication [48, 49, 10, 3]. Despite these observations, the study of multicompartment problems with the stochastic mathematical models requisite to capture important features is presently scarce, albeit a rich area for both mathematical and biological insight.
Code used to produce the results is available on GitHub at https://github.com/ap-browning/multicompartment.
The choice to study a simplified linear stochastic model allows us to formulate the multicompartment problem as a multidimensional Gaussian process, enabling us to draw on the significant body of literature devoted to the study of the statistical properties of such systems [26, 27, 28] to formulate a series of analytical expressions for key statistics unique to stochastic processes, including the variance, covariance, and autocorrelation function. Alternative approaches to a more theoretical analysis could include the study of the system response to pure waves and input pulses; however, the goal of this work is to study the simple model directly. While we are not able to solve explicitly for the probability density function of the FPT, we present a series of numerical and approximate results that provide insight into the FPT and the rate at which the mean FPT scales with the number of compartments in the system. We then apply our linear model to study how the behaviour or robustness of biological systems can be modified through perturbations to unidirectional progression through the system. Viral replication, for example, is known to be a highly stochastic process, and progression through replication stages is very often not unidirectional [2].
Multicompartment processes are ubiquitous in biology; from linear progression through the cell cycle, to phage replication in bacteria and the propagation of viruses by hijacked cellular machinery. Our analysis demonstrates that even a fundamental linear multicompartment structure provides potential advantages and benefits to the systems that employ them. These results parallel filters and autoregressive models in control theory that allow engineers to control and exploit systems subject to noisy input [25, 44, 45].
Figure 4: First passage time distributions for the linear multicompartment model. (a,d) Realisations of a three compartment system initiated using (a) the fixed initial condition and (d) the partially-fixed initial condition. Solutions are terminated at $t=\tau:X_{3}(\tau)>a$, yielding $\tau$ as the FPT. (b,e) Distribution function for the FPT constructed from (colour) 1000 realisations of the SDE and (dashed black) a finite difference solution to eq. 21. (c,f) Mean, 2.5% quantile, and 97.5% quantile for the FPT distribution constructed from (grey) 1000 realisations of the SDE and (black) a finite difference solution to eq. 21. Shown in red-dashed is an approximation to the mean FPT constructed by scaling the FPT for $\nu=1$ based on matching the second derivative of the autocorrelation function (eq. 16). The barrier for each compartment is located at $a=\tilde{a}\sigma_{\nu}$ where $\tilde{a}=1$. Other parameters are fixed at $\theta=\mu=k=1$ and $\sigma=0.5$.
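A hedged simulation sketch of the FPT experiment in this caption (the exact SDE is not reproduced in this excerpt; the chain structure below, a mean-zero OU process feeding downstream relaxation compartments, is an assumption that borrows the caption's parameter names):

```python
import numpy as np

def first_passage_time(nu=3, theta=1.0, k=1.0, sigma=0.5, a=0.5,
                       dt=1e-3, t_max=200.0, rng=None):
    """Euler-Maruyama simulation of a hypothetical linear chain of nu
    compartments: the first is an Ornstein-Uhlenbeck process (mean zero
    here, for simplicity); each later compartment relaxes toward its
    predecessor at rate k. Returns the first time X_nu exceeds the
    barrier a, or np.nan if it never does before t_max."""
    rng = rng or np.random.default_rng()
    x = np.zeros(nu)  # "fixed" initial condition at the origin (assumption)
    sqdt = np.sqrt(dt)
    for step in range(int(t_max / dt)):
        dx0 = -theta * x[0] * dt + sigma * sqdt * rng.standard_normal()
        dxs = k * (x[:-1] - x[1:]) * dt   # computed before updating x[0]
        x[0] += dx0
        x[1:] += dxs
        if x[-1] > a:
            return (step + 1) * dt
    return np.nan

# Empirical FPT distribution from repeated realisations:
# taus = [first_passage_time() for _ in range(1000)]
```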
A
Self-referencing embedded strings (SELFIES) improve on the initial idea of SMILES for usage in machine learning processes by creating a robust molecular string representation [15]. SMILES offered a simple and interpretable characterization of molecules that was able to encode the elements of molecules and their spatial features. However, the spatial features rely on an overly complex grammar in which rings and branches are not locally represented features. This complexity causes issues, especially in generative models, where machines frequently produce syntactically or physically invalid strings. To remove this non-locality, SELFIES uses a single ring or branch symbol, and the length of this spatial feature is directly supplied, ensuring that any SELFIES string has a valid physical representation.
Elman networks, more commonly known as vanilla recurrent neural networks (RNNs), attempt to introduce the concept of a time-dependent dynamic memory [16]. The idea is to make predictions about inputs based on contextual information. Context-based predictions can be made for four input-output schemes: one-to-one, one-to-many, many-to-one, and many-to-many. One-to-one models are a variation of a classic neural network, one-to-many models are best for image caption generation, many-to-one models are best for sentiment analysis, and many-to-many models are best for translation or video frame captioning. Fig. 1 shows the basic structure of a vanilla RNN.
The available MoleculeNet benchmark [9] uses SMILES for its molecular representation. A review of the molecule strings shows that not all are canonical. Including non-canonical SMILES is problematic, as SMILES grammar is already complex; the molecules are therefore converted to RDKit's canonical form to reduce complexity. The next issue is caused by RNNs: one of the many advantages of RNNs is that they accept variable-length inputs to account for a variable length of history. This is only true theoretically; in practice, RNN memory has limits, which is the focus of many newer works [27]. Despite this limitation, it has recently been shown that RNNs can handle input lengths of around 45-50 before performance begins to degrade [28, 29]. Using this knowledge, we set a maximum SMILES length of 46 for the molecules. This limit keeps a slight majority of the molecules while allowing us to ensure the RNN performs well. After limiting the SMILES length, the SMILES are converted to SELFIES. The intention of converting SMILES to SELFIES is to reduce grammar complexity and simplify the learning process for the RNN. SELFIES converts each element and structural component, such as rings or branches, into its own label. These labels are then encoded into numerical values based on their dictionary indices.
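A minimal sketch of the canonicalization and length-filtering steps described above, assuming RDKit; the toy list is a hypothetical stand-in for the MoleculeNet SMILES:

```python
from rdkit import Chem

def canonicalize(smiles: str):
    """Return RDKit's canonical SMILES, or None if parsing fails."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

MAX_LEN = 46  # RNN performance degrades beyond input lengths of ~45-50

dataset = ["C1=CC=CC=C1", "CC(=O)OC1=CC=CC=C1C(=O)O"]  # toy stand-in
canonical = [canonicalize(s) for s in dataset]
kept = [s for s in canonical if s is not None and len(s) <= MAX_LEN]
```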
Unfortunately, vanilla RNNs suffer from memory saturation issues, so they are not always reliable. Many methods have been proposed to overcome this issue, but one of the most popular is the Gated Recurrent Unit (GRU) [17]. The basic structure of a GRU is shown in Fig. 2. We can mathematically describe each of the components using Equations 3, 4, 5, and 6. Equation 5 represents the candidate hidden state function, representing the potential updated state. Equation 6 performs the actual update to the hidden state based on the previous hidden state and the candidate hidden state. Both Equation 3 and Equation 4 allow the network to tune the importance of the contribution of the previous hidden state to the new hidden state. Because of the $r_t$ and $z_t$ parameters, the GRU can better control its memory state, offering improved performance over vanilla RNNs in practice.
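For reference, the standard GRU gate equations consistent with the description above (the exact parametrization used in the original Equations 3-6 is our assumption; this is Cho et al.'s formulation):

\[
\begin{aligned}
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right) && \text{(reset gate, cf. Eq. 3)}\\
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right) && \text{(update gate, cf. Eq. 4)}\\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\right) && \text{(candidate state, cf. Eq. 5)}\\
h_t &= (1-z_t)\odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(state update, cf. Eq. 6)}
\end{aligned}
\]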
Before training on the selected MoleculeNet datasets referenced in Section II-A, we perform an additional reduction to the dataset by setting a lower bound of 31 on the SMILES string length, which allows the search space to remain sufficiently complex while reducing the overall run time. The lower bound reduces the datasets before stratified splitting of the data, using 80% for training and 20% for testing [33]. The stratified splitting is intended to maintain the known sample rate of a given side effect, modelling real-world testing. However, during training, we want to remove the sampling bias to ensure our model accurately learns the causes of a side effect. The minority samples within the training set are therefore duplicated to obtain an even sample count between the side-effect-present and side-effect-absent classes. After replicating training samples, the SMILES are converted to SELFIES. Typical natural language processing (NLP) methods use word, sub-word, or character tokenization to convert strings into numerical values, but we opt for a slightly different method, which we explain by referring to Equation 7. It shows the SELFIES representation of benzene, where each element and structural component is enclosed in brackets. Using this representation, we tokenize based on each set of brackets that exists within the SELFIES-converted dataset. This results in a total of 47 unique values. After tokenizing the SELFIES, the embedding dimension, the input dimension of the RNN, and the hidden dimension of the RNN are set to a size of 47 to match the dimensional space of the tokens. To give the RNN model the best opportunity to make accurate classifications, we use a single model to perform a single side effect classification prediction. For SIDER, instead of predicting all 27 potential side effect classifications, we opt to predict 20 side effect classifications due to extreme imbalances present in the side effect data. The vanilla RNN architecture results in a model with 11.5K parameters and the GRU architecture results in a model with 18.8K parameters. Both train in under 2 minutes on an Nvidia GeForce RTX 3090. To compare our performance with other works that use MoleculeNet, we evaluate using the suggested metric, the area under the receiver operating characteristic curve (ROC) [1, 34].
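A minimal sketch of the split-balance-tokenize pipeline described above, assuming scikit-learn and the selfies package; the toy molecules and labels are hypothetical stand-ins for the filtered dataset:

```python
import random
import selfies as sf
from sklearn.model_selection import train_test_split

# toy stand-ins for the filtered MoleculeNet SMILES and one side-effect label
smiles = ["C1=CC=CC=C1", "CCO", "CC(=O)O", "CCN", "CCC", "CCCl", "CCBr", "CC=O"]
labels = [1, 0, 0, 1, 0, 0, 1, 0]

X_tr, X_te, y_tr, y_te = train_test_split(
    smiles, labels, test_size=0.25, stratify=labels, random_state=0)

# balance the training set by duplicating minority-class samples
pos = [x for x, y in zip(X_tr, y_tr) if y == 1]
neg = [x for x, y in zip(X_tr, y_tr) if y == 0]
minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
minority += random.choices(minority, k=len(majority) - len(minority))

# convert to SELFIES and tokenize on whole bracketed symbols
selfies_train = [sf.encoder(s) for s in minority + majority]
alphabet = sorted(sf.get_alphabet_from_selfies(selfies_train))
vocab = {tok: i for i, tok in enumerate(alphabet)}
tokens = [[vocab[t] for t in sf.split_selfies(s)] for s in selfies_train]
```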
A
When species share identical niches, they cannot coexist, which is known as the competitive exclusion principle (CEP).
One such example is observed in the ocean with phytoplankton, known as the paradox of the plankton [9].
The ecological niche refers to all the environmental factors required for a species’ survival, such as resources and habitat.
Various mechanisms, such as temporal fluctuations and spatial heterogeneity, have been proposed to resolve this paradox.
To explain biodiversity exceeding the bound that the CEP predicts, temporal environmental fluctuations, spatial heterogeneity, and other mechanisms have been proposed [9, 13, 14] and have provided successful explanations.
C
In this section, we derive lower bounds on the slowest growth ($V(0)=ru$) and decay ($V(0)=u$) processes to compare with the upper bounds.

We have conjectured and proved upper bounds for individual graphs on $\overline{T}(r)$ for decay processes in (11) and (13) and proved an upper bound for growth on connected graphs in the limit $R_{0}\downarrow 1$ in Lemma 5. We combine the results and define an upper bound $\hat{T}_{c}(r,G)$ on $\overline{T}(r)$ for a connected graph, specified by the subscript '$c$':

In this work, a first step towards the analysis of the intermediate regime is presented. We define the upper-transition time $\overline{T}(r)$, a threshold quantity which characterizes the border between the intermediate regime and the quenched regime, in which the network is approximately static. In an analysis of an SIS epidemic, this threshold quantity $\overline{T}(r)$ can determine whether a network can be assumed to be static. Indeed, when the inter-update time $\Delta t$ is larger than $\overline{T}(r)$, the epidemic process can, in most situations, be accurately predicted using the quenched approximation. We show that for fixed infection rate $\beta$, curing rate $\delta$ and initial state vector $V(0)$, but with different graphs, the basic reproduction number $R_{0}$ determines the upper-transition time $\overline{T}(r)$. We derive upper and lower bounds for the upper-transition time $\overline{T}(r)$ in (25), (26) and (28), and compare them in Fig. 9 to numerical estimations of $\overline{T}(r)$. We introduce the derivative convergence time $t^{\ast}(r^{\ast})$ in Appendix C to upper bound the upper-transition time $\overline{T}(r)$, and we argue that $t^{\ast}(r^{\ast})$ is easier to determine numerically, although the computation time saved varies depending on how one determines the steady-state prevalence $y_{\infty}$.

Inequality (17) allows us to deduce a non-trivial lower bound for $\overline{T}(r)$ in growth processes:

Lemma 5 (Upper bound on $\overline{T}(r)$ for growth).
C
\[
\dot{u}_{i}=\hat{X}_{i}(V)\big|_{V=g(u)}=\left[\nabla^{2}_{ij}\Phi(V)\,V_{j}-g^{-1}(V_{i})\right]\Big|_{V=g(u)}\,.
\]

By analogy with the constant matrix case, we reformulate the Jacobi function (11) and the energy function (8) starting from the new Lagrangian $\Phi(V)$ in (13). Indeed, the Jacobi function (11) is now

We observe that if we define the function $\Phi_{A}(V)$ as the dominant term of the energy:

so we recognize that it can be thought of, borrowing standard arguments from analytical mechanics, as coinciding with the Jacobi function $\mathcal{J}_{A}$ related to the Lagrangian $\Phi_{A}$:

The authors in [8, 9] extend the classical Hopfield model by devising a more general Lagrangian $\Phi(V)$
A
To show the forward direction, suppose that $N$ is not a tree-child network. We will show that there must be a spanning tree for $N$ that is not a support tree. If $N$ is not tree-child, then it has at least one vertex that is not visible. Let $v$ be a non-visible vertex that is maximally distant from the root, so that all vertices descended from $v$ are visible. If we delete each arc out of $v$, then, because $v$ is not visible, there is still a path from the root to each vertex, so $N$ has a spanning tree $T$. However, in this tree, $T$ has $v$ as a leaf. The tree $T$ is therefore a spanning tree of $N$ in which not all leaves are in $X$, so $T$ is not a support tree.

An important property of tree-child networks is that all of their vertices are visible [3, Lemma 2]. A vertex $v$ in a network is visible if there is a leaf $x$ for which every path from the root to $x$ passes through $v$. In this section, we show how visibility can be interpreted using covers, beginning with the definition of the backtrack of a label in a cover.

Normal networks are a subclass of the tree-child networks, with the added constraint that they contain no "shortcuts" [20]. A shortcut is an edge $(u,v)$ for which there is an alternative directed path from $u$ to $v$ in the network.

Suppose $N$ has a shortcut. Then there is a vertex $x$ with a non-trivial path from some vertex $v$ to $x$, and there is also an edge $(v,x)$. The existence of a non-trivial path from $v$ to $x$ means that the cover has a non-trivial backtrack from $x$, which includes the children of $v$ as a set. However, $x$ is also a child of $v$, so $x$ is in a set in the backtrack.

Let $v$ be the label of the parent of $S$. Then $x$ is a child of $v$, meaning there is an edge $(v,x)$ in $N$. However, the backtrack provides a non-trivial path in $N$ from $v$ to $x$ through $S$. That is, $N$ contains a shortcut.
B
Let $T>0$ be the present. For comparison with our results for constant population size, we keep the same definitions of $\theta_{1}$, $\theta_{2}$ and $\alpha$, and we set $\rho(T)=1$. Thus, $N$ is the population size at the present time $T$, and $\theta_{1}$, $\theta_{2}$ and $\alpha$ are the present-day values of these variables. The corresponding values at some other time $t$ are $N\rho(t)$, $\theta_{1}\rho(t)$, $\theta_{2}\rho(t)$ and $\alpha\rho(t)$. The demographic function $\rho$ could for example represent exponential population growth, in which case $\rho(t)=\rho(0)e^{\beta t}$ for some positive constant $\beta$. This model was used in Wakeley et al. (2023) to illustrate the effects of rapid growth on neutral rare variation in humans. Here we allow $\rho(t)$ to be piecewise continuous. As will become clear, the key feature of $\rho$ for our results is that it is continuous at $T$.

Suppose we are given a sample from the selected locus at the present time $t=0$, and that we know the allelic types of the sample but we do not know how the sample was produced. What is the genealogy of the sample? This question was answered by Barton et al. (2004), who modeled the ancestral process using the structured coalescent with allelic types as subpopulations. The structured coalescent can be a model of subdivision with migration between local populations (Takahata, 1988; Notohara, 1990; Herbots, 1997) or a model of selection with mutation between allelic types (Kaplan et al., 1988; Darden et al., 1989). For samples from a population at stationarity as in Section 1, Barton et al. (2004) proved that this could be done rigorously starting with a Moran model with finite $N$ and then passing to the diffusion limit. Barton and Etheridge (2004) explored some properties of gene genealogies under this model, and Etheridge et al. (2006) used the same idea to describe genetic ancestries following a selective sweep.

In this paper, we have considered a two-allele model at a single genetic locus subject to recurrent mutation and selection in a large haploid population with possibly time-varying size. We assumed that a sample of size $n$ was drawn uniformly from an infinite population under the diffusion approximation. By extending the framework of Barton et al. (2004), we described the asymptotic behaviors of the conditional genealogy and the number of latent mutations of the sample, given the sample frequencies of the two alleles. This moves beyond what is in Wakeley et al. (2023) by the inclusion of selection and by the use of an entirely different model, i.e. coalescence in a random background (Barton et al., 2004). This yields novel results. For example, in the strong selection case in which the selection strength $\alpha$ is proportional to the sample size $n$ and both go to infinity (our scenario (iii)), the genealogy of the rare allele can be described in terms of a Cox-Ingersoll-Ross (CIR) diffusion with an initial Gamma distribution.

For a population with time-varying size $\rho(t)N$ at forward time $t$, where $\rho$ is a non-constant function, neither the Moran process nor its diffusion approximation possesses a stationary distribution. However, the random background approach of Barton et al. (2004) can be generalized to this setting by considering the time-reversed frequency process.
Since the random background approach of Barton et al. (2004) was formulated based on the lineage dynamics of the Moran model, we begin by describing the diffusion process arising from a Moran model with time-varying population size.
D
Remember that datasets A-G (Table 2) represent more or less "clean" data with very little noise, with the exception of E, where a notion of noise is introduced by sampling success probabilities from the $\mathpzc{Beta}$ distribution (which is still arguably a "clean" dataset, as it is sampled from exactly the $\mathpzc{DNM}$ distribution); therefore, one could expect conventional models to perform well. Interestingly, the binomial and beta-binomial approaches fare better than their counterparts as implemented in MIXALIME. The most likely explanation is that MIXALIME employs a local maximum likelihood/windowed approach, which results in less data being used for each parameter estimate and in estimates fluctuating around the true value, whereas in the classical binomial and beta-binomial approaches, the whole dataset is used once for parameter estimation. On the other hand, the more the true joint distribution deviates from the classical multinomial models, the more the MIXALIME approach starts to shine (groups H-L, Table 3).
We rely on the automatic differentiation framework JAX (Bradbury et al., 2018) to obtain an analytical gradient of the log-likelihood function 10. This, obviously, requires the PMF of the model in Equation 4 to be differentiable in the first place. This condition is met if we compute $G(l)$ straightforwardly according to the definition: $G(l)=\sum_{n=0}^{l}f(n)$. However, as the truncation boundary $l$ increases, so does the computational burden: note that each computation of $f$, both in the case of the Negative Binomial $f_{\mathpzc{NB}}$ and the Beta Negative Binomial $g_{\mathpzc{BetaNB}}$, requires evaluations of Euler's Gamma function $\Gamma$ and the Beta function $B$:
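A minimal sketch of how such a differentiable truncated PMF can be written in JAX, evaluating the Gamma functions through gammaln in log-space; the Negative Binomial parametrization and the left-truncation boundary are our assumptions for illustration:

```python
import jax.numpy as jnp
from jax import grad
from jax.scipy.special import gammaln, logsumexp

def log_nb(n, r, p):
    """log f_NB(n; r, p) = log [ Gamma(n+r) / (Gamma(r) n!) * p^r * (1-p)^n ]."""
    return (gammaln(n + r) - gammaln(r) - gammaln(n + 1.0)
            + r * jnp.log(p) + n * jnp.log1p(-p))

def log_left_truncated_nb(n, r, p, l=4):
    """NB log-PMF conditioned on n > l, with G(l) = sum_{m=0}^{l} f(m) as in the text."""
    log_G = logsumexp(log_nb(jnp.arange(l + 1.0), r, p))
    return log_nb(n, r, p) - jnp.log1p(-jnp.exp(log_G))

# analytical gradient of the log-PMF w.r.t. r, via autodiff
dll_dr = grad(log_left_truncated_nb, argnums=1)(10.0, 2.0, 0.3)
print(dll_dr)
```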
Remember that datasets A-G (Table 2) represent more or less "clean" data with very little noise, with the exception of E, where a notion of noise is introduced by sampling success probabilities from the $\mathpzc{Beta}$ distribution (which is still arguably a "clean" dataset, as it is sampled from exactly the $\mathpzc{DNM}$ distribution); therefore, one could expect conventional models to perform well. Interestingly, the binomial and beta-binomial approaches fare better than their counterparts as implemented in MIXALIME. The most likely explanation is that MIXALIME employs a local maximum likelihood/windowed approach, which results in less data being used for each parameter estimate and in estimates fluctuating around the true value, whereas in the classical binomial and beta-binomial approaches, the whole dataset is used once for parameter estimation. On the other hand, the more the true joint distribution deviates from the classical multinomial models, the more the MIXALIME approach starts to shine (groups H-L, Table 3).
MIXALIME is written in the Python programming language. We took advantage of the autodifferentiation and just-in-time compilation provided by the JAX framework, and we used optimization routines from the scipy package. For reading and processing input datasets we rely on a combination of the datatable, pandas (pandas development team, 2020) and pysam packages. Implementation-wise, most of the math is done in a separate package named betanegbinfit (for the sake of possible usage outside of the task of identifying allele-specific events), whereas the MIXALIME package itself is more of a wrapper around it.
To test the performance of various MIXALIME models, we evaluated different models and methods on a series of synthetic datasets generated by our testing framework (see Appendix I for implementation details). We generated 86 synthetic sets of varying configurations (i.e. parameters that were passed to the generator; see the list below for available parameters of interest) as presented in Table 2 and Table 3. To evaluate the performance of the models, we used sensitivity and specificity metrics. Also, each dataset was resampled with a different random seed 20 times to obtain both means and standard deviations of the PR AUC, sensitivity and specificity metrics. The parameters available to the generator are:
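A minimal sketch of computing these evaluation metrics (PR AUC, sensitivity, specificity) for one resampled dataset, assuming scikit-learn; the toy labels, scores, and threshold are hypothetical:

```python
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix

def evaluate(y_true, scores, threshold=0.5):
    """PR AUC plus sensitivity/specificity at a fixed decision threshold."""
    pr_auc = average_precision_score(y_true, scores)
    y_pred = (np.asarray(scores) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return pr_auc, tp / (tp + fn), tn / (tn + fp)

# toy example; in the benchmark this is repeated over 20 reseeded resamples
print(evaluate([0, 0, 1, 1, 1, 0], [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]))
```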
C
We recall that $(\lambda^{*}(t),\sigma^{*}(t),A^{*}(t),C^{*}(t))$ is the solution to

$\sigma^{N}_{i}(t)$. We start by describing the dynamics of the epidemic in the

the McKean–Vlasov equation (4). Then, if $I(t,\cdot)$ (resp. $S(t,\cdot)$) is the density of $A^{*}(t)$ on the

We start by deriving the equation for $I(t,\cdot)$. Let us compute,

requires studying the variations of the function $F_{\mathrm{e}}$. Let us start
C
Second, in the white noise limit, the DMFT leads to the stochastic logistic model, a phenomenological model that proved to be consistent with several macro-ecological laws in microbial ecosystems [4, 40].
In particular, the analytical species abundance distribution derived from the DMFT follows the Gamma distribution, a widely utilized probability distribution in macroecology [32, 1]. Again, a similar truncated fat-tailed distribution has recently been shown in the chaotic phase [36] and in the strongly interacting limit [5, 41] of the QGLV with immigration.
Facilitating species coexistence through cyclic fluctuations is a mechanism that has also been observed in the chaotic phase of the QGLV [19, 35, 36].
In the global equilibrium phase, the biodiversity of the QGLV model is limited by the stability-diversity paradox [27, 28, 29, 17]. Moreover, the species abundance distribution (SAD), as obtained in the limit of a large number of species within the dynamical mean field theory (DMFT) or the cavity method, is a truncated Gaussian [20, 18], very different from the heavy-tail SAD observed in empirical microbial [30, 31, 4] or forest [32] communities.
Firstly, the introduction of annealed disorder in the GLV equations, for any finite correlation time, has a substantial positive influence on the biodiversity of the system. Specifically, when the dynamics of the system converge to the stationary distribution, we observe quasi-cycles of species population dynamics, where species abundances alternate between high and low values, favoring the coexistence of all species (if we do not artificially introduce any minimal threshold below which we consider a species extinct). This is, in fact, a similar outcome to what QGLV models found in the chaotic phase [19, 36] when introducing an immigration rate $\lambda$.
A
Having laid out what constitutes a queueing system, let us consider some examples. The simplest arrival process is a renewal process, in which the interarrival times are independent and identically distributed random variables. Renewal processes are denoted by $G$ in Kendall's notation, which stands for general or unspecified interarrival time distribution. Special cases of renewal processes are the Poisson process, denoted by $M$ (Markovian or memoryless), in which the interarrival times are exponentially distributed, and the deterministic process, denoted by $D$, in which the interarrival times are fixed. The simplest service process is one in which the service time of each customer is taken from the same probability distribution. A general or unspecified service time distribution is denoted by $G$, of which special cases are the exponential distribution, denoted by $M$, and the deterministic (degenerate) distribution, denoted by $D$.
Gene expression is a fundamental cellular process by which genetic information encoded by a gene is turned into a functional product, such as an RNA or protein molecule. Models of gene expression are typically concerned about the statistics of either RNA or protein counts as a proxy of gene activity; rarely the description of both is considered because the simultaneous measurement of RNA and protein in the same cell is challenging. In what follows, we will mostly focus on the RNA description of gene expression; where appropriate, we will discuss the protein description.
Fig. 2 summarizes various stochastic processes that are related to the MAP. The simplest MAP is the Poisson process (denoted by $M$ for Markovian or memoryless), which has only one state. This process describes a gene that is always active and produces RNA at exponentially distributed intervals [Fig. 2(b)]. One way to generalize the Poisson process is to have the arrival rate controlled by a finite-state Markov process. This process, which is called the Markov modulated Poisson process (MMPP), is a special case of the MAP in which $D_{1}$ is a diagonal matrix [78]. The simplest stochastic gene expression model with this arrival process is the leaky telegraph model [79], in which the gene switches between two states, both of which are transcriptionally active [Fig. 2(c)]. We note that in the MMPP, the gene remains in the same state immediately after producing RNA. A gene that produces RNA from multiple states, but is allowed to change state upon the production of RNA (in which case $D_{1}$ is no longer a diagonal matrix), is described by a general MAP.
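To make the MMPP structure concrete, one possible parametrization of the leaky telegraph model as a MAP (the rate symbols $k_{\mathrm{on}}$, $k_{\mathrm{off}}$, $\lambda_{0}$, $\lambda_{1}$ are our own notation): the off-diagonal entries of $D_{0}$ hold the switches of the modulating two-state chain, while the diagonal matrix $D_{1}$ collects the state-dependent RNA production rates, so that $Q=D_{0}+D_{1}$ is the generator of the gene-state process:

\[
D_{0}=\begin{pmatrix}-(k_{\mathrm{on}}+\lambda_{0}) & k_{\mathrm{on}}\\ k_{\mathrm{off}} & -(k_{\mathrm{off}}+\lambda_{1})\end{pmatrix},
\qquad
D_{1}=\begin{pmatrix}\lambda_{0} & 0\\ 0 & \lambda_{1}\end{pmatrix}.
\]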
The $G/M/\infty$ queue is an infinite-server queue in which the interarrival times are independent and identically distributed random variables, customers arrive individually one by one, and the service times are exponentially distributed. It is a special case of the $G^{X}/G/\infty$ queue with batches of size 1 and exponential service times. Many models of gene expression can be mapped to this queue, some of which are shown in Fig. 4. The simplest is the one-state (birth-death) process in which the gene is always active and produces RNA at exponential intervals [Fig. 4(a)]. The popular telegraph model, in which the gene switches between two states of activity and inactivity and produces RNA from the active state, is shown in Fig. 4(b) [30]. Fig. 4(c) shows the ratchet model, which is a generalization of the telegraph model to multiple transcriptionally inactive states that are accessed sequentially [58, 33]. These three models have in common that the gene remains in the active state upon the production of RNA. In contrast, Fig. 4(d) shows the refractory model, which accounts for the binding of RNA polymerase and its release into productive elongation, after which the gene switches back to an earlier state absent of RNA polymerase [80, 29]. Finally, Fig. 4(e) shows a canonical model of eukaryotic transcription [86, 87, 69] that includes the on and off switching of the promoter, the binding of six general transcription factors (IID, IIA, IIB, IIF, IIE and IIH) and RNA polymerase, the unwinding of the double-stranded DNA, and the promoter proximal pausing of RNA polymerase in metazoans [88, 89].
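To make the queueing correspondence concrete, here is a minimal Gillespie sketch of the telegraph model of Fig. 4(b); the rate names and values are our own illustrative choices. Transcription events are the "arrivals" and exponential RNA degradation is the "$M$" service:

```python
import numpy as np

rng = np.random.default_rng(0)

def telegraph_ssa(k_on=1.0, k_off=1.0, k_tx=10.0, k_deg=1.0, t_end=50.0):
    # gene toggles OFF <-> ON; the ON state produces RNA; each RNA degrades
    t, gene_on, rna = 0.0, 0, 0
    while True:
        rates = np.array([
            k_on if gene_on == 0 else 0.0,   # OFF -> ON
            k_off if gene_on == 1 else 0.0,  # ON -> OFF
            k_tx if gene_on == 1 else 0.0,   # transcription event (arrival)
            k_deg * rna,                     # degradation of one RNA (service)
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return rna                       # RNA copy number at t_end
        event = rng.choice(4, p=rates / total)
        if event == 0:
            gene_on = 1
        elif event == 1:
            gene_on = 0
        elif event == 2:
            rna += 1
        else:
            rna -= 1

samples = [telegraph_ssa() for _ in range(200)]
print(np.mean(samples), np.var(samples))
```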
Transcription—the synthesis of RNA—is typically modelled as a multistep process in which the gene switches between multiple states before it eventually produces an RNA molecule. Depending on the level of details, transitions between gene states may reflect individual biochemical events, such as binding of transcription factors and RNA polymerase at the promoter, or more phenomenologically, a combination of these events that results in the gene being either active or inactive. Once the RNA is produced, it goes through a series of steps until it is eventually degraded. In the Markovian setting, these steps can be described by the following reaction scheme
A
Ultimately, we are interested in the joint law of a finite number of focal processes within an infinite system of such mean-field interacting MTBDPs.
We prove that the empirical distribution process of the $N$ replicas converges to a deterministic probability measure-valued flow as $N\to\infty$.
We are now prepared to prove the convergence of the empirical measure process in a system with freezing.
To this end, we establish that the process of the empirical distribution of families converges to a deterministic probability measure-valued flow.
Next, we establish the exchangeability of the finite system and demonstrate the Markovianity of the empirical measure process.
C
One example is the CLIP (Contrastive Language-Image Pre-training) model (Radford et al., 2021), a transformer model that facilitates cross-modal understanding between images and text.
ESM2 is a protein language model that uses a transformer-based architecture and an attention mechanism to learn the interaction patterns between pairs of amino acids in the input sequence.
In addition to sequence-based approaches, graph-based representations leverage the three-dimensional (3D) structure of proteins to capture their functional properties.
It combines a ViT vision encoder with a transformer-based language encoder to learn joint representations of images and their associated textual descriptions.
The transformer-based encoder-decoder model was first introduced by Vaswani et al., (2017) in their paper “Attention is all you need”.
C
\[
\left(\boldsymbol{1}-\boldsymbol{M}\right)^{-1}\left\{\overline{\boldsymbol{D}}+\operatorname{diag}\left[\boldsymbol{Q}^{*}\left(\boldsymbol{D}=\overline{\boldsymbol{D}}\right)\right]\right\}\left(\boldsymbol{1}-\boldsymbol{M}\right)^{-\mathrm{T}}\,.
\]

Here $\overline{\boldsymbol{D}}$ denotes the disorder-averaged noise

function $\widetilde{Z}(\boldsymbol{J})$. Here, $\widetilde{\boldsymbol{X}}$

together with $\overline{\boldsymbol{D}}$ yields an effective noise

\[
\langle\delta\boldsymbol{C}^{2}\rangle_{\boldsymbol{W},\boldsymbol{D}}=\left(\overline{\mathrm{CV}^{2}}\,\overline{\nu}\right)^{2}\boldsymbol{F}
\]
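As a small numerical illustration of evaluating a covariance of this sandwich form, here is a numpy sketch; the effective connectivity $\boldsymbol{M}$, the averaged noise $\overline{\boldsymbol{D}}$, and the diagonal correction are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

M = 0.1 * rng.standard_normal((N, N))      # stand-in effective connectivity
D_bar = np.diag(rng.uniform(0.5, 1.5, N))  # stand-in disorder-averaged noise
q_diag = rng.uniform(0.0, 0.2, N)          # stand-in for diag[Q*(D = D_bar)]

P = np.linalg.inv(np.eye(N) - M)
C = P @ (D_bar + np.diag(q_diag)) @ P.T    # (1 - M)^{-1} {...} (1 - M)^{-T}
print(C)
```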
A
By monotonicity, it suffices to prove the result for $\beta_{2}=\infty$ and starting from the all-2 configuration.

In this case, the infected region generated by the 2 initially at site $y$ is dominated by its counterpart when starting with a single 2 at site $y$ in an otherwise healthy population.

Let $T_{y}$ be the extinction time of the infected cluster starting at site $y$, and let $\tau_{r}$ be the extinction time of the infected region generated by all the 2s initially in $\Lambda_{r}$.

For all $r>0$, starting with a single 2 at the origin in an otherwise healthy population, and identifying $\xi_{s}$ with the set of infected sites,

Assuming that $d>1$ and starting the process with a single 1 at the origin in an otherwise healthy population,
A
\[
\mathcal{L}=\lambda_{p}\left[\frac{1}{9}\left\|R_{i}^{\text{syn}}-R_{i}^{\text{pred}}\right\|_{F}^{2}+\frac{1}{2}\left\|t_{i}^{\text{syn}}-t_{i}^{\text{pred}}\right\|_{1}\right]
\]
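A hedged PyTorch sketch of this pose loss; the batch shape, the 3x3 rotation representation, and the 2D in-plane translation are our assumptions for illustration:

```python
import torch

def pose_loss(R_syn, R_pred, t_syn, t_pred, lambda_p=1.0):
    # Frobenius term averaged over the 9 rotation-matrix entries,
    # plus an L1 term on the translation, halved, as in the equation above
    rot = (R_syn - R_pred).pow(2).sum(dim=(-2, -1)) / 9.0
    tra = (t_syn - t_pred).abs().sum(dim=-1) / 2.0
    return (lambda_p * (rot + tra)).mean()

R_syn = torch.eye(3).expand(8, 3, 3)
R_pred = R_syn + 0.01 * torch.randn(8, 3, 3)
loss = pose_loss(R_syn, R_pred, torch.zeros(8, 2), 0.01 * torch.randn(8, 2))
print(loss)
```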
To address these challenges, we propose a self-supervised deep learning framework called HetACUMN based on amortized inference. By alternating the variational image reconstruction task and the conditional pose prediction task, the VAE-based architecture explicitly enforces the disentanglement of the conformation and pose latent spaces. Experiments on simulated datasets show that HetACUMN outperformed other amortized-inference-based methods such as cryoFIRE. On the other hand, our method has comparable accuracy in pose estimation and even better performance in estimating conformational distributions than non-amortized methods. Furthermore, we demonstrated that HetACUMN can also be used on experimental datasets.

Therefore, the encoder can learn from a comprehensive pose training dataset even if the input EM image dataset is small or highly biased.

We argue that this is not an inherent drawback of amortized inference. HetACUMN, which performed well on both small and large datasets, is a better alternative when data and/or computational resources are limited.

The conditional pose prediction task takes the encoder and decoder from the multi-class image reconstruction task and reverses their order.
B
Biomedical image data science competitions have emerged as an effective way to accelerate the development of cutting-edge algorithms. Several successful competitions have been specifically organized for microscopy image analysis, such as the cell tracking challenge (CTC) [43, 33], the Data Science Bowl (DSB) challenge [3], and Colon Nuclei Identification and Counting Challenge (CoNIC) [14]. These competitions have played a crucial role in expediting the adoption of modern machine learning and deep learning algorithms in biomedical image analysis. However, it is worth noting that these challenges have primarily focused on a limited subset of microscopy image types.
Figure 2: Dataset overview. a, The challenge provides a diverse microscopy image dataset that includes tissue cells, cultured cells, label-free cells, stained cells, and different microscope modalities (i.e., brightfield, fluorescent, phase-contrast (PC), and differential interference contrast (DIC)). b, The geographical distribution of data sources and challenge participants. The red, green, purple, and blue address icons denote the countries or regions where the brightfield, fluorescent, phase-contrast, and differential interference contrast image datasets are from, respectively. The size of the pink circle in each country is proportional to the number of participants from the corresponding country. c, The number of images in the training set. d, The number of labeled cells in the training set. e, Randomly selected examples (from left to right: brightfield, fluorescent, PC, and DIC images) from the training set (the 1st row) and testing set (the 2nd row). f, The number of images in the testing set. g, The number of cells in the testing set. There are two fluorescent whole-slide images (WSI) in the testing set.
For example, the CTC primarily concentrated on label-free images, thereby excluding stained images such as multiplexed immunofluorescent images. Similarly, the DSB challenge emphasized nucleus segmentation in fluorescent and histology images while disregarding phase-contrast and differential interference contrast images. The segmentation task in the CoNIC challenge is also limited to nucleus segmentation in H&E stained images.
Fig. 2e shows four microscopy images randomly selected from each modality in the training set and testing set. In order to assess the algorithm’s generalization capabilities, all testing images were sourced from new biological experiments, including some that featured previously unseen tissues or cell types not present in the training set. The testing set consisted of 120 brightfield images, 122 fluorescent images, 120 phase-contrast (PC) images, and 60 differential interference contrast (DIC) images (Fig. 2f). These quantities were determined based on the available images collected for the challenge.
We further visualized segmentation examples of the seven algorithms to gain insights into their characteristics (Fig. 4f, Extended Data Fig. 1). The top three best-performing algorithms demonstrated relatively robust results, with the best-performing algorithm (T1-osilab) displaying exceptional accuracy across diverse microscope types, cell types, and image contrasts. Notably, KIT-GE exhibited better performance on phase-contrast images than on stained images, as it was designed based on a label-free challenge dataset. Nevertheless, KIT-GE struggled to segment other images from new biological experiments, indicating limited generalization ability in this context. The Cellpose models outperformed Omnipose models on most images, except for DIC images featuring numerous small objects with low contrast. Additionally, the Cellpose-scratch model surpassed the Cellpose-pretrain model on brightfield images, exhibiting fewer segmentation errors. However, its performance decreased on other modalities that contained previously unseen images, leading to an increased number of missed cells in the segmentation results.
B
When a finite number of oscillators is considered, other features may be exploited, each with their own limitations. When the network exhibits symmetries, it is possible to enumerate all phase-locked states with weak or strong coupling [20], but this method is not suited to work in the case of asymmetries [23]. In networks of neurons, the pulse-like shape of action potentials allows for the use of pulse coupling [11, 5, 4, 52, 40]. This approach yields analytically tractable results for weak or strong and possibly asymmetric coupling, but the number of oscillators is often limited to pairs. The study of network behavior can be made tractable by using piecewise smooth models, but coupling functions require particular assumptions such as linear coupling [9, 8], weak coupling [7, 49], and Laplacian coupling [43]. In addition, the analysis of phase-locked states is often restricted to understanding the stability of a synchronous network state [8, 10] (although some do consider the stability of splay states [7]).
In the present study, we address this gap in the literature by deriving a phase reduction method applicable to networks of (weakly or strongly attracting) coupled oscillators with arbitrary network topology beyond weak coupling, i.e., we calculate higher-order corrections to the first-order reduction methods while incorporating isostable coordinate(s). The formulation includes $N$-body interactions on simplicial complexes and enables us to study the existence and stability of phase-locked states in a manner not possible using the original models.
When a finite number of oscillators is considered, other features may be exploited, each with their own limitations. When the network exhibits symmetries, it is possible to enumerate all phase-locked states with weak or strong coupling [20], but this method is not suited to work in the case of asymmetries [23]. In networks of neurons, the pulse-like shape of action potentials allows for the use of pulse coupling [11, 5, 4, 52, 40]. This approach yields analytically tractable results for weak or strong and possibly asymmetric coupling, but the number of oscillators is often limited to pairs. The study of network behavior can be made tractable by using piecewise smooth models, but coupling functions require particular assumptions such as linear coupling [9, 8], weak coupling [7, 49], and Laplacian coupling [43]. In addition, the analysis of phase-locked states is often restricted to understanding the stability of a synchronous network state [8, 10] (although some do consider the stability of splay states [7]).
The most relevant reduction for the present study is the theory of weakly coupled oscillators, which allows for a general form of the vector field and coupling function so long as the coupling strength is weak [15, 26, 46, 48, 49, 47]. To be more precise, by weak coupling we mean phase reductions that only consider expansions up to first order in the coupling strength (often represented by $\varepsilon$), and are thus generally only guaranteed to be valid for arbitrarily small $\varepsilon$. The weak-coupling assumption is a severe limitation because it cannot necessarily be used to accurately capture the dynamical behavior of coupled oscillator networks in many biological networks, e.g., cortical networks [53, 6], subcortical networks [62], and pacemaker networks [3, 19]. Indeed, recent studies have pushed beyond the weak coupling regime by deriving correction terms in higher orders of the coupling strength (i.e., non-weak coupling), but these too have their limitations. Higher-order phase correction terms considered in [55, 18, 64] require the underlying limit cycle to be strongly attracting, limiting their applicability when Floquet multipliers are close to unity [67]. Recently developed isostable coordinates have proven invaluable towards developing more robust phase reductions, e.g., [69, 50, 42]. However, these methods have only been applied to pairs of oscillators without heterogeneity (except in [42], where the authors consider the complex Ginzburg-Landau model for $N\geq 2$ and the Morris-Lecar model for $N=200$), and a recently-published article by [34] closely mirrors our assumptions but is only valid for planar systems. The recent work by Nicks et al. [42] is of significant relevance to this paper, and we briefly contrast our results in the Discussion (Section 6).
Second, we use first-order averaging, which is technically valid for small $\varepsilon$ comparable to those used in weak coupling theory. This limitation is especially apparent in the last example, where the thalamic model is near a SNIC bifurcation and the reciprocal of the period ($1/(44\,\mathrm{ms})\approx 0.023$) places an approximate upper bound on the coupling strength $\varepsilon$, as $\varepsilon$ must be much smaller than $1/T$ [32]. This example may benefit from higher-order averaging methods [31, 32]. In addition, we have observed phase drift in the full model (data not shown) in a manner that may not be possible to capture in the current formulation. For example, with $N=3$ homogeneous oscillators and some values of $\varepsilon$, two oscillators synchronize and the third exhibits a phase drift, effectively resulting in a two-oscillator system with a drift in the remaining phase difference. In our formulation, a single phase difference equation cannot exhibit drift without heterogeneity. This discrepancy may be due to ignoring transients in the isostable coordinates; if we were to include explicit isostable dynamics such as in [42], this behavior may be captured.
C
\[
\times\left(C_{0}+i\int_{0}^{k_{1}}\mathrm{d}k_{2}\,k_{2}^{\alpha-1}e^{-2W_{1,0}^{(0)}k_{2}}\right)+\mathcal{O}\!\left(\left[\sigma^{\alpha}\right]^{2}\right)\,.
\]

It turned out to be seemingly impossible to generalize the pseudocumulant formalism to rational fractional $\alpha=L/N$ via expansions of the logarithm $\Phi(k,t)$ of the characteristic function in series with respect to $k^{1/N}$ or some other rational fractional powers of $k$. Noticeably, the circular cumulant approach [Tyulkina-etal-2018; Goldobin-Dolmatova-2019b] was reported to be useful for dealing with fractional $\alpha$-stable noises in terms of oscillation phases [Dolmatova-Tyulkina-Goldobin-2023], at least in the case of an additive-in-phase noise, which is of course not our case, where the additive-in-voltage noise corresponds to a multiplicative noise in terms of the oscillation phase.

We have addressed the problem of the mathematical description of the macroscopic dynamics of populations of quadratic integrate-and-fire neurons subject to $\alpha$-stable white noises. The interest in these models is manifold: the QIF is not only the normal form for neuron models with Class I excitability near the threshold between the excitable regime and the regime of periodic spiking, but is also mathematically equivalent to the problem of Anderson localization in a one-dimensional setup [Lifshitz-Gredeskul-Pastur-1988] in condensed matter. The mathematical challenges emerging in our study are related to the specificity of the QIF but not to the nature and topology of the connection network. While we report on the case of a recurrent network of chemical synaptic all-to-all connections, the formalism can be readily extended to the cases of balanced networks with sparse synaptic connections [Volo-Torcini-2018; Volo-etal-2022], electrical synapses [Laing-2015; Pietras-Devalle-Roxin-etal-2019; Montbrio-Pazo-2020], etc.

In the latter equation, we can substitute expansions of all functions in series of $k$ and find

Nonetheless, for $\alpha$-stable noises, one can construct expansions of $\Phi(k,t)$ in series of the noise intensity $\sigma^{\alpha}$. The theoretical results derived with the latter expansion for a population of QIFs with excitatory synaptic coupling subject to non-Gaussian noise are in good agreement with the results of numerical simulation for both homogeneous (Fig. 3) and heterogeneous populations (Fig. 4). One observes reasonable accuracy even for the bifurcation curves of the noise-driven regimes at noise amplitudes as large as $\sigma=0.5$ (the right-hand-side branches of the cusps in Figs. 3 and 4).
C
The code supporting the conclusions of this study is available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. The repository contains the Malmo environment code, training scripts for both the predictive coding and autoencoding neural networks, as well as code for the analysis of predictive coding and autoencoding results. Should there be any questions or need for clarifications about the codebase, we encourage readers to raise an issue on the repository or reach out to the corresponding author.
Moreover, we study the predictive coding neural network's representation in latent space. Each unit in the network's latent space activates at distinct, localized regions (called place fields) with respect to physical space. At each physical location, there exists a unique combination of overlapping place fields. For any two locations, the difference between the combinations of overlapping place fields provides the distance between the two physical locations. The existence of place fields in both the neural network and the hippocampus (O'Keefe, 1976) suggests that predictive coding is a universal mechanism for mapping. In addition, vector navigation emerges naturally from predictive coding by computing distances from overlapping place field units. Predictive coding may provide a model for understanding how place cells emerge, change, and function.
In the previous section, we demonstrated that the predictive coding neural network captures spatial relationships within an environment, containing more internal spatial information than can be captured by an autoencoder network that encodes image similarity. Here, we analyze the structure of the spatial code learned by the predictive coding network. We demonstrate that each unit in the neural network's latent space activates at distinct, localized regions of physical space.
The code supporting the conclusions of this study is available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. The repository contains the Malmo environment code, training scripts for both the predictive coding and autoencoding neural networks, as well as code for the analysis of predictive coding and autoencoding results. Should there be any questions or need for clarifications about the codebase, we encourage readers to raise an issue on the repository or reach out to the corresponding author.
All datasets supporting the findings of this study, including the latent variables for the autoencoding and predictive coding neural networks, as well as the training and validation datasets, are available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. Researchers and readers interested in accessing the data for replication, verification, or further studies can contact the corresponding author or refer to the supplementary materials section for more details.
D
(B) Architecture of the EEG encoder. Temporal-spatial convolution is used with spatial modules, made with self and graph attention, to reveal spatial features of brain activity. The linear layer is used to project the feature dimension.
To address the above limitations, we introduce a self-supervised framework to decode image representations from EEG signals, focusing on object recognition.
Beyond the self-supervised framework, we try to demonstrate the biological plausibility by resolving the visual processing of EEG signals.
We propose a self-supervised framework, Natural Image Contrast EEG (NICE), to decode images from EEG signals.
In conclusion, we propose a self-supervised framework to decode natural images from EEG for object recognition.
C
There is a natural interpretation of $q(s)$ as the chance of success in a single trial. In the simple case with no death (i.e. $\mu=0$), the value simplifies to $q(s)=1-e^{-\lambda s}$, which is the probability that a single individual alive at time $0$ gives birth to at least one new offspring at or before time $s$. Because the births occur in exponential time, the distribution of the total progeny $N_{s}$ assigns equal probability density to every tree relating the offspring. As a result, any new offspring generated may be viewed as having an equal chance of having some offspring by time $s$ as if it existed at time $0$. Eventually, some new offspring will fail to give birth to any further offspring, terminating the process. In this setting, "failure" corresponds to a single individual having no new offspring. For general $\mu>0$, the interpretation is similar, though the success probability $1-q(s)$ additionally incorporates assumptions of survival.
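A quick Monte Carlo check of the no-death case: the waiting time to an individual's first birth is exponential with rate $\lambda$, so the empirical fraction of first births occurring by time $s$ should match $q(s)=1-e^{-\lambda s}$. The parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s, n_trials = 1.0, 0.7, 100_000

# first-birth times are Exp(lam); q(s) is the chance the first birth is <= s
first_birth = rng.exponential(1.0 / lam, size=n_trials)
q_hat = (first_birth <= s).mean()
print(q_hat, 1.0 - np.exp(-lam * s))   # empirical vs analytical, ~0.503
```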
We will utilize all components of the GDL model to perform species tree estimation. As stated in the Introduction, the primary focus will be the observed numbers of copies at speciation nodes in the species tree. We review known results and provide new technical results in the Appendix. The accumulation of these results is presented in the following Theorem.
Before proving the Propositions, it will be useful to give an intuition behind the meaning of the results for phylogeneticists and other practitioners. Before and after the uniform sampling step, the gene tree expresses meaningful signal about the overarching species tree. Of course, if any species receives zero copies of a given gene, then the gene tree offers no information at all about that species. We condition on survival of the gene in all extant species, so this does not occur.
The specific form of $M_{f}(\tau)$ is outlined in the Appendix. Unfortunately, this integral is difficult to compute analytically. However, numerical methods could be useful for characterizing the expectation of this ratio. This result could conceivably be used to give exact expressions for the probability of each tree topology, and an implicit formula is given in Proposition 15. Explicit tools are developed for rooted trees with three species in the Appendix, though the results do not find application beyond those of Legried et al. (2021). In the rest of this paper, we focus on deriving simpler one-sided results to obtain relevant results for rooted trees with four species.
In this paper, the distribution of gene trees is described further for gene trees generated under GDL. With this further information, we describe when anomaly zones can exist for gene trees generated under GDL for rooted species trees on either three or four species. As with anomalous gene trees in the multispecies coalescent model, the lengths of interior edges of the species tree are important. As the interior branch lengths in the species tree grow to infinity, the probability that the gene tree topology coincides with that of the species tree goes to 1, and discordant gene trees have lower probability. Similarly for GDL, species trees with longer interior edges have lower probabilities of discordant gene trees. However, the parameters governing birth and death are also relevant. As observed in Hill et al. (2022), when the per-capita birth rate is high, the number of edges is high and the signal emitted by the species tree diminishes. Conversely, when the birth rate is 0, every discordant gene tree has probability zero, for any setting of branch lengths in the species tree. Similar effects occur when the death rate is high enough to prevent excessive branching in the GDL process, but explicit quantitative results are required to understand this effect. This paper provides results that aid in intuiting the connection between the birth and death rates and gene tree discordance, but the focus is on the number of copies in the ancestral population rather than the birth and death rates themselves. The main results apply to any choice of birth and death rates and species trees with three or four leaves.
A
Hence, the computation of a single extreme first hitting time requires O(N e^{Λ/ϵ}/Δt) time steps of length Δt.
It is counter-intuitive that an extreme first passage time is highly likely to be caused by a pathway that is highly unlikely to occur in an individual walker.
Perhaps the extreme first passage time is an unlikely deviation from the typical rare event, an event that is doubly rare, in some sense.
Moreover, it is quite likely that the first passage time, because it is a rare event, is highly sensitive to discretization error.
There are many situations, particularly those involving exponential proliferation, where it is not the mean first passage time of a single event that is of interest; rather, it is the first of N identical rare events to occur that is relevant [37].
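As a rough illustration of why the minimum of N first passage times behaves so differently from the mean, the following sketch (ours; parameters are arbitrary, and walkers not absorbed before the step cap are simply ignored) simulates N discretized Brownian walkers absorbed at a barrier and compares the sample mean hitting time with the extreme (minimum) one.

```python
import numpy as np

rng = np.random.default_rng(1)
N, barrier, dt, D = 1000, 1.0, 1e-3, 1.0   # walkers, barrier, step, diffusivity
max_steps = 200_000

x = np.zeros(N)
hit_times = np.full(N, np.nan)
alive = np.ones(N, dtype=bool)
for step in range(1, max_steps + 1):
    x[alive] += np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
    newly_hit = alive & (x >= barrier)
    hit_times[newly_hit] = step * dt
    alive &= ~newly_hit
    if not alive.any():
        break

# The first of N events is dominated by atypically direct paths, so the
# minimum sits far below the sample mean of individual hitting times.
print("mean T:", np.nanmean(hit_times), "| extreme (min) T:", np.nanmin(hit_times))
```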
C
A recent work demonstrated that weight uncertainty in the form of the SaS (spike-and-slab) structure can also be incorporated into the transformer [45]. In addition, gated recurrent neural networks with multiplicative mechanisms were recently shown to be able to learn to implement linear self-attention [46]. Furthermore, the relationship between linear transformers, which allow for faster autoregressive learning, and RNNs was established in a recent work [47]. Taken together, our current work would be a starting point for building a bridge between biological learning (towards the science of specialized brain circuits) and transformer learning within the seminal predictive coding hypothesis, which can be placed in the theoretically solid conceptual framework of variational free energy minimization.
Grant Number 12122515 (H.H.), and Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices (No. 2022B1212010008),
Our proposed MPL achieves equal or even better performance compared with traditional methods in all three tasks, showing the advantage of ensemble predictive coding, since examples of single networks can be readily sampled from the trained distribution [28, 18]. By analyzing the distribution of hyperparameters, we find that most connections are deterministic in the input and recurrent layers, while the output layer has a higher level of variability. The observation that the output connections bear a higher level of variability is a universal result in all three tasks, which may particularly connect to the generative function of the language processing model. The network performance changes non-linearly and continuously with data load α = M/N, where M is the training data size and N is the number of neurons in the circuit, and we found that the critical point is given by α_c ≈ 0.02, beyond which prediction rises above chance level. With increasing size of the training data, the performance further improves until perfect learning is achieved. We can then use the resulting network to generate text of arbitrary length (to create something is a first step to understanding that thing), and the generated text perfectly follows the grammatical rule set before training. In addition, our MPL achieves performance on the Penn Treebank corpus comparable with other RNN training methods, although the framework is less accurate than the transformer architecture, which thereby calls for further studies of the mechanistic difference between biological learning and non-biological transformer learning, and of how the latter can inspire the discovery of new fundamental elements of computation that can realize logical and mathematical reasoning in many different tasks [29, 30].
To study the network behavior, we plot the distribution of the hyperparameters m, π, and Ξ when the RNN is trained with the MPL method, as shown in Fig. 6. We find that the mean weight m for all layers is symmetrically distributed around zero, with a relatively narrow distribution. The distribution of π for all layers is L-shaped and peaks at π = 0, indicating that a dense network is favored and formed after learning. The distribution of Ξ is U-shaped and has two peaks. One peak is at Ξ = 0, indicating that these weights are deterministic and can only take the single value m; the other peak is at Ξ ≃ 0.01, indicating that the corresponding connection can carry a range of candidate values. Currently, it remains unknown how to relate these microscopic details of the network structure to the decoding of the semantic information in the corpus. It is thus important in future works to design analytically tractable models of language processing bridging neurophysiological plausibility and the superior performance observed in state-of-the-art architectures, which would help to uncover key neuron, synapse, and circuit motif types in the human brain.
To study the properties of this simplified language model, we plot the distribution of the hyperparameters [π, m, Ξ] for the input layer, output layer, and recurrent layer, respectively. The distribution of [π, Ξ] is L-shaped in all layers, while the output layer allows for more variability in both the sparsity and the variance of the Gaussian slab, which is characterized by a slightly broader distribution of [π, Ξ]. The extremes π = 0, π = 1, and Ξ = 0 have particular physical significance. π = 0 indicates that the connection has no sparsity, and thus carries important information for the task. The spike mass at π = 1 implies that the connection is always zero, and thus is not important for the task, but none of the connections of our model belong to this case. Ξ = 0 shows that the corresponding connection is deterministic, because the corresponding Gaussian distribution reduces to a Dirac delta peak. This result is also observed in the 28-by-28 MNIST classification task. The distribution of the hyperparameter m is broadest in the output layer, ranging from −200 to 200, showing the higher-level variability in the connection weights of the output layer. This phenomenon may be closely related to the fact that the embedded rule can only be retrieved by using a highly heterogeneous weighting of each neuron’s activity in the reservoir, which is particularly interesting from the perspective of neural decoding of language information and probabilistic computation in a biologically plausible setting [15, 10, 30], since our embedded rule is actually a probabilistic generative rule mixed with a predefined grammatical structure.
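For readers unfamiliar with the spike-and-slab parameterization, the following minimal sketch (our construction; the toy values of π, m, Ξ are made up) shows how a weight is drawn given the per-connection hyperparameters: with probability π the weight is pinned at zero (the spike), otherwise it is drawn from the Gaussian slab N(m, Ξ), and Ξ = 0 collapses the slab to a deterministic weight m.

```python
import numpy as np

def sample_sas_weights(pi, m, Xi, rng):
    """Draw one weight per connection from a spike-and-slab distribution:
    spike (exact zero) with probability pi, slab Normal(m, Xi) otherwise."""
    spike = rng.random(size=pi.shape) < pi
    slab = m + np.sqrt(Xi) * rng.standard_normal(size=pi.shape)
    return np.where(spike, 0.0, slab)

rng = np.random.default_rng(0)
pi = np.array([0.0, 0.0, 0.9, 0.0, 0.5])     # sparsity of each connection
m  = np.array([0.3, -1.2, 0.0, 2.0, 0.7])    # slab means
Xi = np.array([0.0, 0.01, 0.01, 0.0, 0.01])  # slab variances (0 = deterministic)
print(sample_sas_weights(pi, m, Xi, rng))
```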
A
Organoids are self-organized 3D tissues typically derived from stem cells, exhibiting key functional, structural, and biological complexity similar to organs [1]. Their close biological resemblance makes organoid culture analysis crucial for advancing biological studies, as it aids in understanding the extent to which organoids resemble their in vivo counterparts.
In this paper, we utilized the SegmentAnything model for automatic organoid structure identification in microscopy images. We claim that the SegmentAnything model showed promising performance, and that our post-processing efforts were also necessary to enhance the accuracy of organoid structure detection and ensure reliable organoid morphology analysis. Overall, this research contributes to the field of organoid analysis in microscopy images by presenting an efficient approach for individual organoid detection and morphology analysis without any prerequisites on data annotation. The automated pipeline offers promising avenues for accelerating and enhancing the characterization and quantification of organoid features, paving the way for further advancements in organoid research and related disciplines.
The first issue we encountered was that SegmentAnything sometimes misidentified the background as an object, resulting in non-zero indices for the background in the masks. Secondly, the high resolution of whole microscopy images necessitated the use of cropped patches for model fitting. However, this approach introduced incomplete organoids along the edges of the patches, leading to erroneous analysis of morphological properties. To address these concerns, we implemented an automated process in which the boundaries of the image patches were examined and all objects located in these regions were excluded. A third challenge was observed with organoids possessing a lumen structure, where the model inaccurately demarcated the regions into two separate objects. To rectify this problem, we computed the maximum boundary of each mask and unified all values within this boundary. Lastly, debris might be erroneously identified as objects (organoids in this scenario) by the model. Unfortunately, we have not yet found an automated method to remove them. Thus, we manually marked these non-organoid structures and deleted them, which, compared to manually identifying all organoid structures, proved to be a relatively simple task.
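Two of these post-processing steps are easy to express in code. The sketch below is our own reconstruction (assuming SegmentAnything returns one boolean mask per detected object), not the authors' implementation: it drops objects touching the patch border and fills lumens so a hollow organoid is treated as a single object.

```python
import numpy as np
from scipy import ndimage

def postprocess_masks(masks):
    """masks: iterable of 2D boolean arrays, one per detected object."""
    cleaned = []
    for mask in masks:
        # (1) exclude incomplete organoids cut off at the patch edges
        border = np.zeros_like(mask, dtype=bool)
        border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
        if np.any(mask & border):
            continue
        # (2) unify all pixels within the maximum boundary (fill the lumen)
        cleaned.append(ndimage.binary_fill_holes(mask))
    return cleaned
```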
In this study, we explore the potential of SegmentAnything [4], a foundation model trained on an extensive dataset of 11 million images encompassing diverse modalities, to automate individual organoid detection in microscopy images. Moreover, we have integrated comprehensive post-processing and analysis of morphological properties using the masks generated by SegmentAnything. The workflow is demonstrated in Fig. 1. Our main claim is that this proposed pipeline enables both automatic and accurate organoid detection, as well as fully automated organoid morphology analysis.
The analysis of organoid morphology is commonly performed by capturing images of the organoids grown in multi-well plates. However, existing methods have limitations since they aggregate cell growth information over an entire well, rather than providing information about individual organoids and their constituent cells [2]. Unfortunately, manually demarcating organoids in microscopy images poses significant challenges. The sheer number of organoids in a single whole slice microscopy image can reach thousands, making manual demarcation a laborious and time-consuming task.
D
dZ_t/dt = −(2/3) A² α ∇L(Z_t),  Z_0 = θ^{(k,0)},
∇L is bounded and Lipschitz continuous with Lipschitz constant λ.
is bounded by a multiple of n^{−1}.
∇L is bounded and Lipschitz continuous with Lipschitz constant λ.
is bounded by a multiple of n^{−1} and hence its square by n^{−2}.
A
This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work is also supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (2020R1A2C2009093) and by the Korea Environment Industry & Technology Institute (KEITI) through its Ecological Imitation-based Environmental Pollution Management Technology Development Project funded by the Korea Ministry of Environment (MOE) (2019002790007).
T.M.K. conceptualized the work and developed the theory; J.K. carried out the experiments; K.K. contributed to carrying out the experiments; W.Q.B. contributed to developing the theory; C.S., J.P. and R.v.R. supervised the research. All authors discussed the results and contributed to the manuscript.
Up to this point, we treated the system in steady state. When extending our view to the device dynamics, we need to consider the time it takes for ions to accumulate into or deplete out of the channel. Utilizing our aforementioned expression for the total salt flux through the channel, we calculate the net flux γV′ into the channel upon a small applied voltage V′ and find (see SI for details) that γ ∝ D/L, with L the channel length and D the ionic diffusion coefficient. The contributions to the net flux come solely from the conductive, i.e. voltage-driven, flux term in the Nernst-Planck equation. The proportionality to D/L is intuitive, as the electric field strength in the channels is proportional to 1/L and all flux terms are proportional to the ionic mobilities and hence to D. With our expression for the slab-averaged salt concentration profile we also calculate the total change in salt αV′ upon applying the small voltage V′, and we find α ∝ L. This proportionality to L again is intuitive, as the volume of the channel scales with L. The ratio α/γ between this total change in salt and the net flux provides an estimate for the concentration polarisation timescale, which by the two proportionalities above scales as α/γ ∝ L²/D, the typical diffusion time of the channel.
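For intuition, an order-of-magnitude sketch (ours; the channel length and diffusion coefficient below are illustrative assumptions, not the paper's values):

```python
# tau ~ L^2 / D, the concentration polarisation (channel diffusion) timescale
L = 100e-6    # assumed channel length: 100 micrometres
D = 1.5e-9    # typical ionic diffusion coefficient in water, m^2/s
tau = L**2 / D
print(f"tau ~ {tau:.1f} s")   # ~6.7 s for these assumed values
```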
The PNP equations form an effective theoretical framework to analyse ion transport in charged porous materials [42]. However, the complex three-dimensional geometric structure of the NCNM, with features on length scales varying from the colloidal surface-surface distance all the way up to the channel length, introduces intricate numerical challenges for fully spatially resolved solutions of the PNP equations. To simplify, we consider slab-averages, i.e. the average along a cross section [38, 43, 44, 40, 41], of the electric potential and the ionic concentrations in the porous structure between the colloids. Although this sacrifices nanoscale details, it does account for the pinched electric field lines towards the channel tip and for the spatial variation of the ionic charge density. Through this method we reduce the three-dimensional Nernst-Planck equation to a one-dimensional form, providing an expression for the total salt and charge flux through the channel. The divergence of the total salt flux qualitatively shows that the experimentally observed inhomogeneous ionic space charge density forms a source (sink) term of salt, resulting in salt accumulation (depletion) upon a positive (negative) applied voltage V. Quantitatively, a divergence-free steady-state condition on the total salt flux provides a differential equation for the voltage-dependent slab-averaged salt concentration profile, which we solve analytically. By viewing the channel as a series of conductive slabs, with the conductance of each slab proportional to the (now known) voltage-dependent salt concentration, we calculate the steady-state channel conductance g_∞(V) = I(V)/V. This describes how an increase (decrease) in salt in the channel at positive (negative) voltages makes the channel more (less) conductive. Our theory thus quantitatively confirms the experimental hypothesis that the ionic space charge distribution results in salt concentration polarisation and hence in current rectification [36]. Moreover, leveraging the general analytical nature of our theory, we demonstrate that any inhomogeneous ionic space charge density in generic channels (provided they are well-described by slab-averaged PNP equations) is the key ingredient for a source-sink term of salt and thus for current rectification, derived in detail in the SI. Therefore we not only provide a mechanistic insight as to how the space charge leads to current rectification in the channel of present interest, but this understanding could also explain current rectification in channels with other sources of space charge densities and with other geometries [23, 37]. Furthermore, this insight may provide inspiration for the future design of devices that exhibit current rectification.
Our memristor is inspired and supported by a comprehensive theory directly derived from the underlying physical equations of diffusive and electric continuum ion transport. We experimentally and quantitatively verified the predictions of our theory on multiple occasions, among which the specific and surprising prediction that the memory retention time of the channel depends on the channel diffusion time, despite the channel being constantly voltage-driven. The theory relies exclusively on physical parameters, such as channel dimensions and ion concentrations, and enabled streamlined experimentation by pinpointing the relevant signal timescales, signal voltages, and a suitable reservoir computing protocol. Additionally, we identify an inhomogeneous charge density as the key ingredient for iontronic channels to exhibit current rectification (provided they are well-described by slab-averaged PNP equations). Consequently, our theory paves the way for targeted advancements in iontronic circuits and facilitates efficient exploration of their diverse applications.
A
The first documented cases of HIV/ZIKV co-infection in Colombia and Brazil highlighted the potential interactions between these two viruses. The co-circulation of both viruses in South America presented an important challenge for public health authorities, as the viruses share a transmission mechanism, sexual contact, and have overlapping clinical symptoms.
Given the potential impact of HIV/ZIKV co-infection on public health, it is crucial to understand the transmission dynamics of these viruses and evaluate the effectiveness of intervention strategies. Mathematical models are useful for understanding and providing insights into public health policy decisions. To our knowledge, there is no evidence of mathematical models studying this phenomenon in the literature. Therefore, this study aimed to formulate and analyze an HIV/ZIKV co-infection model, assuming that both viruses are sexually transmitted and that ZIKV is also mosquito-transmitted. The analysis of this model is expected to identify important transmission outcomes that would help design and evaluate different control and prevention strategies to minimize their impact on public health.
The mathematical modelling conducted in this study was used to theoretically represent the transmission dynamics of both viruses in co-infected individuals, allowing for the evaluation of different intervention scenarios. The findings of this study were particularly relevant during the 2015-2016 Zika outbreak in South America, which coincided with the region’s chronic HIV epidemic. By identifying the patterns of this co-infection and their potential impact on transmission, the results of this study could inform public health strategies to control and prevent the spread of both viruses.
While this study has provided insightful information, it is crucial to recognize several limitations in its methodology. First, the mathematical modelling approach, which is essential to capture the innate complexity of viral transmission dynamics, introduces certain simplifications. These simplifications result from the challenge of modelling a biological phenomenon in a more accessible manner, avoiding potential barriers to general reader comprehension and unnecessary complexities in the presentation of results. The interactions among susceptible individuals, co-infected individuals, and individuals infected with a single pathogen can yield a multitude of scenarios, making it extremely challenging to predict outcomes owing to the complex nature of infection pathways and potential combinations of results. For instance, a susceptible individual engaging in sexual contact with a co-infected person may contract Zika, HIV, or both simultaneously, creating a scenario that is not easily represented by mathematical equations. Consequently, our model assumed that co-infected individuals could transmit the virus only through vectorial contact. Second, the study's reliance on numerical simulations with parameter values specific to Colombia and Brazil reduces the generalizability of the findings to other geographical locations. Diverse epidemiological landscapes, varying healthcare practices, and demographic differences across regions may result in distinct patterns of Zika and HIV co-infection dynamics. Additionally, this study's dependence on limited historical data and assumptions regarding intervention effectiveness may not fully capture the dynamic nature of evolving public health strategies. Ongoing developments in medical interventions, shifts in public health policies, and the emergence of new viral variants can significantly affect the effectiveness of the proposed intervention measures. These limitations emphasize the need for care when extending the study's findings. Additionally, more research is necessary to improve the applicability of the mathematical models under different scenarios and to address the remaining challenges for public health.
In summary, this study highlighted the need for continued research on the transmission dynamics of Zika and HIV/AIDS and on developing effective intervention strategies to control and prevent their spread. Future work in this field plans to incorporate compartments of women giving birth to babies with and without congenital malformations to better understand the impact of co-infection in children. Including these compartments would allow for a more detailed assessment of the long-term effects of co-infection on child health outcomes, including the potential for developmental delays, neurological deficits, and other complications. This approach could also facilitate the development of more targeted prevention and treatment strategies for the affected children and their families. Ultimately, this research is critical for improving our understanding of the complex interactions between HIV and Zika and for developing effective public health interventions to mitigate their impact on affected individuals and communities.
B
In this case, a BG network with parallel channels, representing different action requests, arising from the cortex, is usually considered.
Competition between the channels may be resolved by selecting a desired channel with the highest salience.
𝒞_d, to the case of action selection. Saliency of each channel may be given by its 𝒞_d. Then, action in the channel with the highest
Saliency of a channel may be given by the firing frequency of its cortical input; a higher frequency denotes higher saliency.
Due to the increased activity of the DP, the firing frequency of the SNr cells becomes much reduced, to 5.5 Hz, resulting in the opened state of the BG gate to the
C
This approach can be misleading because there may be many good models for a given dataset – a phenomenon referred to as the Rashomon effect [7, 40] — and variables that are important for one good model on a given dataset are not necessarily important for others. As such, any insights drawn from a single model need not reflect the underlying data distribution or even the consensus among good models.
Related to our work from the stability perspective, Duncan et al. [13] developed a software package to evaluate the stability of permutation variable importance in random forest methods; we perform a similar exercise to demonstrate that current variable importance metrics computed for the Rashomon set are not stable. Additionally, Basu et al. [5] introduced iterative random forests by iteratively reweighting trees and bootstrapping to find stable higher-order interactions from random forests. Further, theoretical results have demonstrated that bootstrapping stabilizes many machine learning algorithms and reduces the variance of statistics [18, 8]. We also take advantage of bootstrapping’s flexibility and properties to ensure stability for our variable importance.
Recently, researchers have sought to overcome the Rashomon effect by computing Rashomon sets, the set of all good (i.e., low loss) models for a given dataset [15, 12]. However, the set of all good models is not stable across reasonable perturbations (e.g., bootstrap or jackknife) of a single dataset, with stability defined as in [50]. This concept of stability is one of the three pillars of veridical data science [51, 13]. Note that there is wide agreement on the intuition behind stability, but not its quantification [22, 33]. As such, in line with other stability research, we do not subscribe to a formal definition and treat stability as a general notion [22, 33, 50, 51]. In order to ensure trustworthy analyses, variable importance measures must account for both the Rashomon effect and stability.
In particular, for variable X_2, one interval (ranging from −0.1 to 0.33) suggests that there exist good models that do not depend on this variable at all (0 indicates the variable is not important); on the other hand, another MCR from a bootstrapped dataset ranges from 0.33 to 0.36, suggesting that this variable is essential to all good models. Because of this instability, different researchers may draw very different conclusions about the same data distribution even when using the same method.
Figure 1: Statistics of Rashomon sets computed across 500 bootstrap replicates of a given dataset sampled from the Monk 3 data generation process [42]. The original dataset consisted of 124 observations, and the Rashomon set was calculated using its definition in Equation 1, with parameters specified in Section D of the supplement. The Rashomon set size is the number of models with loss below a threshold. Model reliance is a measure of variable importance for a single variable (in this case, X_2), and Model Class Reliance (MCR) is its range over the Rashomon set. Both the Rashomon set size and model class reliance are unstable across bootstrap iterations.
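The instability check itself is straightforward to reproduce in spirit. The sketch below (ours; it uses a synthetic dataset and a random forest's permutation importance rather than the paper's Rashomon-set machinery) recomputes a variable-importance statistic on bootstrap replicates of one dataset and reports its spread, which is the kind of variation Figure 1 summarizes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=124, n_features=6, random_state=0)
rng = np.random.default_rng(0)

scores = []
for _ in range(100):
    idx = rng.integers(0, len(y), size=len(y))          # bootstrap resample
    model = RandomForestClassifier(random_state=0).fit(X[idx], y[idx])
    imp = permutation_importance(model, X[idx], y[idx],
                                 n_repeats=5, random_state=0)
    scores.append(imp.importances_mean[2])              # track one variable
print(f"importance across bootstraps: min {min(scores):.3f}, max {max(scores):.3f}")
```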
B
Our method draws insights from the HVS (how humans perceive visual stimuli; the forward route in Fig. 1) to address the potential information loss during the transition from the fMRI to the visual domain (the reverse route in Fig. 1).
We do that by deciphering crucial cues from fMRI recordings, thereby contributing to enhanced consistency in terms of appearance, structure, and semantics.
The qualitative results, depicted in Fig. 6, align with the numerical findings, indicating that DREAM produces more realistic outcomes that maintain consistency with the viewed images in terms of semantics, appearance, and structure, compared to the other methods.
We show through experiments that our biologically interpretable method, DREAM, outperforms state-of-the-art methods while maintaining better consistency of appearance, structure, and semantics.
This paper presents DREAM, a visual decoding method founded on principles of human perception. We design reverse pathways that mirror the forward pathways from visual stimuli to fMRI recordings. These pathways specialize in deciphering semantics, color, and depth cues from fMRI data and then use these predicted cues as guidance to reconstruct visual stimuli. Experiments demonstrate that our method surpasses current state-of-the-art models in terms of consistency in appearance, structure, and semantics.
A
Hence, as future work, it would be interesting to investigate the consequences of degeneration of D1 SPNs and cortical pyramidal cells, in addition to
In our present striatal circuit, we considered only the D1/D2 SPNs (the 95% major population).
Within the striatum, spiny projection neurons (SPNs), comprising up to 95% of the whole striatal population, are the only primary output neurons [Str1; Str2]. There are two types of SPNs, with D1 and D2 receptors for the DA. The DA modulates the firing activity of the D1 and D2 SPNs in different ways [SPN1; SPN2; CN6]. In the early stage of HD, degenerative loss of D2 SPNs occurs due to mutation in the HTT gene, while the DA level in the striatum is nearly normal [Degen1; Degen2; Degen3; Degen4].
In the present work, we considered the early stage of HD, where degenerative loss of D2 SPNs occurs at a nearly normal DA level.
Next, we consider the case of phasic cortical input (10 Hz) in the phasically active state [Hump1; CI1; CI2; CI3; CI4; CI5; Str2; CN6; CN14], which is shown in Fig. 3. Population firing behavior of the D1 SPNs, associated with the DP (green color), is shown in their raster plot of spikes and the IPSR R_D1(t) in Fig. 3(a). In comparison to the tonic case with the population-averaged MFR ⟨f_i^(D1)⟩ = 1.03 Hz in Fig. 2(a), the firing activity of the D1 SPNs becomes very active, with ⟨f_i^(D1)⟩ = 30.7 Hz, independently of x_D2.
A
The spread of a new mutation in a population as a function of time can be described by the transition matrix of an appropriate Wright-Fisher model or the transition density of the approximating diffusion. Although diffusion theory leads to quite simple expressions for many important quantities, explicit and analytically tractable time-dependent results, tracing for instance the distribution of allele frequencies, seem to be out of reach (see Steinrücken et al., 2013, for a semi-explicit approach).
We combined two methods to derive an explicit and accurate time-dependent approximation for the mutant's density in any generation n: a branching process approach capturing the stochastic effects and the deterministic logistic growth model. We developed this approach quite generally with the help of a slightly supercritical Galton-Watson process that allows for general offspring distributions.
In Section 3, we derive an explicit, approximate expression for the mutant frequency distribution in a finite Wright-Fisher population as a function of time. This becomes feasible by using a supercritical Galton-Watson process with a quite general offspring distribution to describe the spreading of a beneficial mutant.
The expressions (4.14) for the expected mean Ḡ(τ) and (4.15) for the expected variance V_G(τ) of the trait are exact within our model based on the quasi-deterministic branching process approach, i.e., assuming that the density g_a in (3.4) is exact. Comparison with the results from Wright-Fisher simulations shows that they are astonishingly accurate for the whole evolutionary process, i.e., from the initial phase until the quasi-stationary response has been achieved (Figure 4.2). These expressions require integration with respect to two parameters: one over the time span until the time τ of interest, the other with respect to the distribution f(α) of mutation effects. The former is necessary because the extinction probability is time dependent, at least in the initial phase. The latter integration is unavoidable unless all mutation effects are equal, in which case accurate approximations involve only the computation of the finite sums given in Remark 4.9. We note that these approximations hold for very general offspring distributions, so that they can cover cases where the effective population size N_e differs from the actual population size N. The offspring distribution enters these equations through the extinction probabilities P_ext.
Because it seems unfeasible to derive analytic results for the time dependence of the allele frequency for either the Wright-Fisher model or its diffusion approximation, we approximate the stochastic dynamics in the initial phase, where stochasticity is most important, by a branching process (e.g. Athreya and Ney, 1972; Allen, 2003; Haccou et al., 2005). During this initial phase, interactions between mutant lineages can be ignored if N is large enough. The first step is to approximate the evolution of the frequency distribution of the mutant by a Galton-Watson process. In Section 3, we will approximate this discrete process by a continuous one and couple it with the deterministic allele-frequency dynamics to obtain an analytically accessible model for the long-term evolution of the mutant in a population of size N.
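A minimal simulation sketch (ours, with a Poisson(1 + s) offspring distribution as one convenient choice, and ad hoc values for s, N, and the switching threshold) of this coupling between an early Galton-Watson phase and deterministic logistic growth:

```python
import numpy as np

rng = np.random.default_rng(2)
s, N = 0.05, 10_000    # selective advantage, population size

def mutant_trajectory(t_max=400, threshold=50):
    n = 1
    traj = [n]
    for _ in range(t_max):
        if 0 < n < threshold:              # stochastic branching phase
            n = rng.poisson(1 + s, size=n).sum()
        elif n >= threshold:               # deterministic logistic phase
            p = n / N
            p += s * p * (1 - p)
            n = int(round(p * N))
        traj.append(min(n, N))
    return traj

final = [mutant_trajectory()[-1] for _ in range(10)]
print(final)   # some lineages go extinct (0), the rest sweep towards N
```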
A
This paper does not raise any ethical concerns. This study does not involve human subjects, practices to data set releases, potentially harmful insights, methodologies and applications, potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues.
are in the form of an implicit force depending on the distance between atoms in 3D space and the molecular structure, and could involve atoms
The experimental setups for training and evaluation, as well as the hyperparameters, are described in detail in Section 4 and Appendix A, and the experiments are all conducted using public datasets.
Furthermore, we employ visual representation to depict the molecular structure of the mutagen compound containing the −NO group, as illustrated in Fig. 6. In the primary layer, hydrogen atoms (H) and oxygen atoms (O) are segregated into distinct neural atoms, regardless of the multi-hop distance between them. This enables the model to obtain the atom representation corresponding to each element. Within the remaining layers, the constituents of −NO (N:8 and O:9) are organized into a neural atom, enabling the model to capture their representation comprehensively. This facilitates the prediction of the molecular graph. It is worth noting that the observed grouping pattern is consistent with the interatomic interactions, as shown by the Ewald sum matrix. The atoms denoted as (C:2-O:9) and (C:7-O:3) are assigned to distinct neural atoms. The model is capable of accurately representing the LRI even when there are multiple hops and intermediate atoms between the interacting entities. The presented visualization showcases the efficacy of our proposed methodology in facilitating the connection between remote atoms. Additionally, the atom groups identified in this study have the potential to significantly impact the accurate prediction of molecular graph features.
The allocation pattern for the neural atoms at each layer, as well as the interatomic interactions suggested by the Ewald sum matrix, are visualized in Figs. 5 and 6.
B
Here the LLM's memory was defined in terms of the conditional probability (1) through an appropriate construction of the preceding text fed to the network. This amounts to a functional definition of memory, with the LLM acting as if it were participating in a serial memory test.
The goal of this paper is to identify and explore the features of the memory characteristics of Large Language Models and compare them to some aspects of human memory.
A very characteristic feature of human memory when memorizing lists of words is that words from the beginning and from the end of the list are easier to recall, phenomena called the primacy and recency effects [4] (see sample human data in Fig. 1).
The similarity of the characteristics of human biological memory to LLM’s memory can be a-priori interpreted in two ways:
Such a close similarity of the characteristics of human and LLM memory is in fact very surprising and begs explanation.
D
By generating the data from a log-normal distribution, we construct a synthetic data set from which it should be relatively easier to reconstruct a network in comparison to real data. Real data, unlike our simulated data, may be confounded by many factors, including environmental and climate effects, variable copy number of amplified genes (e.g. the 16S rRNA gene), or even a lack of a true underlying network of interactions. Furthermore, all of the simulated samples are drawn from the same underlying distribution, which may be thought of as taking samples from identical environments and ignoring spatial heterogeneity. This scenario therefore represents a near “best-case scenario” for network reconstruction. The results that follow should therefore be taken as “necessary but not sufficient” in judging a particular method to be useful in network reconstruction. It may be interesting to repeat the experiments with additional methods for generating synthetic data which simulate some of the real-world confounding variables that our method ignores.
To test the above theoretical problems involved in network reconstruction with paired data sets, we developed a simple algorithm for constructing synthetic data from a known “ground-truth” covariance matrix. To do this, we start with a random power-law graph and construct a positive definite covariance matrix that matches the sparsity pattern of the graph but has positive and negative non-integer entries. This construction ensures that we have a sparse, small-world network of positively and negatively correlated nodes, representing synthetic taxa. The simulated exact absolute abundance of each taxon in each sample is then generated by drawing samples from a log-normal distribution using the ground-truth covariance matrix and mean log-values drawn uniformly from the interval (−4, 4). We then generate synthetic data by simulating sequencing experiments. To do this, we draw R_i “reads” from each sample, with R_i a hyper-parameter chosen from a normal distribution with mean 100000 and standard deviation 10000 for each sample. Each simulated read is generated by drawing from the set of taxa with a discrete probability distribution equal to the relative abundances of the taxa in the exact sample. This simulates the real read process, in which each counted amplicon has a probability of being classified as coming from a particular taxon approximately equal to the relative abundance of that taxon.
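A compact version of this generator, as we read the description (the library choices and the specific power-law graph constructor are ours):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_taxa, n_samples = 50, 200

# sparse covariance matching the sparsity pattern of a random power-law graph
G = nx.barabasi_albert_graph(n_taxa, 2, seed=0)
A = nx.to_numpy_array(G) * rng.uniform(-0.5, 0.5, size=(n_taxa, n_taxa))
A = (A + A.T) / 2
cov = A + np.eye(n_taxa) * (np.abs(A).sum(axis=1).max() + 0.1)  # force PD

# exact absolute abundances: log-normal with uniform mean log-values
mu = rng.uniform(-4, 4, size=n_taxa)
abs_abund = np.exp(rng.multivariate_normal(mu, cov, size=n_samples))

# simulated sequencing: R_i reads per sample, drawn by relative abundance
reads = np.stack([
    rng.multinomial(max(1, int(rng.normal(100_000, 10_000))),
                    sample / sample.sum())
    for sample in abs_abund
])   # read counts, shape (n_samples, n_taxa)
```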
In the scenario of the rabbits and piñons, there are two compositional data sets, one which measures animals and one which measures plants. These sets are “paired” with each other in the sense that each vector in one data set, representing for example the number of each animal species counted in an area, can be paired with a vector in the other set representing the number of each plant species counted in the same area. These two vectors, coming from the same discrete sample area, can be thought of as a pair in the combined paired data set.
To simulate paired compositional data sets, we randomly split the taxa into two groups representing two kingdoms of life before drawing synthetic reads. Then, for each exact sample, we generate two independent sets of reads as described above, one for each of the two kingdoms. This means that for each sample, we generate a pair of synthetic data vectors representing the relative abundances of the taxa within each group. This simulates the scenario in which the scientist counts animals and plants (or, in the case of microbiome analysis, bacteria and fungi) separately and has no estimate of the total biomass of either group.
This relatively simple example demonstrates a profound problem for scientists studying the microbiome, where instead of animals and trees we are concerned with, for example, bacteria and fungi. In this setting, taxonomic information is compositional by nature [1, 2], and there exist only limited ways to compute the absolute biomass of taxa [3, 4]. Furthermore, data on two or more kingdoms of taxa (called “transkingdom” data) are often collected with separate methods for each kingdom. The most common example of this is the use of 16S rRNA amplicon sequencing to identify the bacteria in a sample, paired with ITS rRNA amplicon sequencing to identify the fungi. The result is the relative abundance of each bacterial strain identified among bacteria and, separately, the relative abundance of each fungal strain identified among fungi. Notably, the relative abundance of each taxon among the complete set of taxa is unknown. While established techniques can be used to handle compositional data [5, 6, 7], these techniques are designed to work when the composition is known relative to the total data set. When dealing with transkingdom data, we must therefore take into careful consideration whether we are properly handling the compositional data.
C
In the United States, the large-scale vaccination campaign begins approximately 300 days after the initial outbreak, as previously mentioned. The vaccination rate, denoted by ν(t) = 10^{−2} Θ(t − 300), is extrapolated based on the administration of 3.5 million vaccine doses per day on average during the peak period of vaccination, spanning from t = 300 to t = 700 [29, 39]. We analyze the time required to achieve epidemic control, herein referred to as the “epidemic control time,” by comparing it against various hypothetical vaccination rates. It is important to note that “the end of the pandemic” in this context does not signify complete disease eradication, as mentioned in Ref. 28, but rather is defined as the point beyond which the proportion of infected individuals consistently remains below a certain threshold, related to the country's medical capacity. In this article, this threshold is set at 10^{−3}. The choice of this threshold value is informed by the number of hospital beds per thousand people, which is 2.8 in the U.S.A. [27]. This threshold is depicted as a green dashed line in Fig. 2(a).
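A single-country sketch of this setup (ours; β is held constant here for brevity, whereas the paper fits a time-dependent β(t) to U.S. data, and γ and μ are illustrative) wires the step-function vaccination rate and the control-time criterion together:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, mu = 0.2, 0.1, 0.002   # assumed infection/recovery/death rates

def sird(t, y):
    S, I, R, D = y
    nu = 1e-2 if t >= 300 else 0.0  # nu(t) = 1e-2 * Theta(t - 300)
    return [-beta * S * I - nu * S,
            beta * S * I - (gamma + mu) * I,
            gamma * I + nu * S,
            mu * I]

sol = solve_ivp(sird, (0, 1200), [0.999, 0.001, 0.0, 0.0],
                dense_output=True, max_step=1.0)
t = np.arange(0, 1201)
I = sol.sol(t)[1]
below = I < 1e-3   # medical-capacity threshold from the text
control_day = next((d for d in t if below[d:].all()), None)
print("epidemic control time (days):", control_day)
```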
Similar to subsection IV.1, country 1 initiates the vaccination of its own population starting on day 300. The decision on when to share half of the vaccination rate with country 2 is left to country 1. In an extremely benevolent scenario where country 1 opts to share from the outset (on day 300), the infection rate β_i(t) versus time in Fig. 6(a) reveals two new native waves in both countries.
The graph depicts the epidemic control time as a function of the vaccine allocation time in both country 1 and country 2. The trend is examined in scenarios 1 and 2, illustrated in Fig. 4(a) and Fig. 4(b), and in scenarios 3 and 4, shown in Fig. 4(c) and Fig. 4(d), respectively. From the purple line in both Fig. 4(a) and Fig. 4(b), we see that if country 1 had allocated vaccines immediately upon acquiring them on the 300th day, the epidemic control times of both countries would be the same, i.e. the 1168th day for the first scenario without mutual migrations. However, if country 1 starts to distribute vaccine resources after the 300th day, the epidemic control time of country 1 would be shortened as expected, whereas the control time in country 2 would be significantly prolonged.
We modify the coupled two-population Susceptible-Infected-Recovered-Deceased (SIRD) model initially developed by J. Burton et al. [28] to illustrate the impact of vaccination and migration on the measles outbreak within specific subpopulations in Cameroon. This model is adapted to examine the temporal progression of the COVID-19 pandemic in two hypothetical countries with potential migratory connections. In our study of COVID-19, two key model parameters, namely the infection rate and vaccination rate, are made time-dependent. The time-varying infection rate accounts for the impact of non-pharmaceutical interventions in curbing virus transmission and the emergence of pathogenic variants. Likewise, the time-varying vaccination rate reflects the commencement of large-scale vaccination efforts approximately one year after the pandemic's onset, with different rates employed to simulate scenarios involving vaccine sharing. The determination of the infection rate, initial vaccination rate magnitude, and other time-independent parameters within the model is achieved by comparing actual data sourced from the United States [29], chosen for its reliability and comprehensiveness during the early stages of the pandemic, with numerical outcomes derived from a single-country SIRD model. Population exchange coefficients are determined based on international travel data from the United States during the COVID-19 crisis.
The main takeaway is that the infection rate β(t) observed in other countries exhibits a similar pattern and characteristics [12] as demonstrated in Fig. 1(a) for the U.S.A., albeit with different functional forms and durations of sub-peaks. Subsequently, we apply the model parameters derived from the U.S.A., with certain modifications outlined in the following section, to two hypothetical countries. Country 1 initiates its vaccination campaign on day 300, while the commencement day and vaccination rate for country 2 are determined by country 1, mirroring a scenario where country 1 shares its vaccine resources with country 2.
D
To showcase how harmonic persistent homology can be used in multi-omics analyses, we analyzed a set of 690 breast cancer samples from the TCGA database for which both RNAseq and Methylation450 data are present. The dataset is comprised of 414 Luminal-A, 141 Luminal-B, and 135 basal-like samples, and we considered 28,495 genes and 363,791 methylation sites for a total of 392,286 features. We concatenated the RNAseq and Methylation450 data and projected them to a 100-dimensional space using PCA. We built a Vietoris-Rips complex using distance correlation and computed its 1-dimensional persistent homology up to a maximum filtration value of 0.75. We then computed the harmonic representatives of all 66 bars longer than 0.07 (the 0.97 quantile). For each representative we extracted the harmonic weights on the samples, resulting in a 690 × 66 weights matrix, depicted in Figure 5(a). Single-linkage clustering on the samples' weights revealed a large cluster of mostly basal-like samples, whose characteristics are shown in Figure 5(b). It is important to note that this cluster was found using harmonic persistent homology on samples in an unsupervised manner, without any knowledge of the nine descriptors shown on top of Figure 5(a).
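The first half of this pipeline can be sketched with standard tools; the harmonic-representative step needs extra machinery not shown here. In the sketch below (ours), the data matrix is a random stand-in, the sample count is reduced for speed, and we assume “distance correlation” means the sample-pairwise dissimilarity 1 − dcor (using the dcor and ripser packages).

```python
import numpy as np
import dcor                          # pip install dcor
from ripser import ripser            # pip install ripser
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(120, 500))  # stand-in for the 690 samples
Z = PCA(n_components=100).fit_transform(X)

# pairwise dissimilarity between samples: 1 - distance correlation
n = Z.shape[0]
Dmat = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        Dmat[i, j] = Dmat[j, i] = 1 - dcor.distance_correlation(Z[i], Z[j])

# 1-dimensional persistent homology of the Vietoris-Rips complex, thresh 0.75
bars = ripser(Dmat, distance_matrix=True, maxdim=1, thresh=0.75)["dgms"][1]
long_bars = bars[(bars[:, 1] - bars[:, 0]) > 0.07]
print("bars longer than 0.07:", len(long_bars))
```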
Intuitively, harmonic persistent homology establishes relationships between features or observations in the data that may lead to the discovery of hidden patterns and/or novel insights owing to the capability of TDA to analyze data at different scales. Moreover, harmonic persistent homology is naturally equipped to analyze high order interactions between data points, enabling explorations beyond simple pairwise interactions by analyzing homological features in dimensions higher than one. A schematic of our framework is given in Figure 1. In this manuscript, we show how harmonic persistent homology brings to light novel relationships and deeper comprehension of biological data that can help generate new hypotheses using multi-omics data. Harmonic persistent homology has the ability to assign weights on the simplices and therefore in the feature space that it spans. These weights can be used to scale the distribution of multi-omics variables providing an additional information on importance of those variables. Due to the absence of methods and data that addresses the precise problem, it is hard to perform a benchmark comparison of the different tools reported in literature. Therefore, in this manuscript, we demonstrate the capability of harmonic persistent homology by reproducing and expanding on known results from the literature. In particular, using different datasets, such as bulk RNA from CLL patients [9], single cell RNA (scRNA) data from Richter Syndrome (RS) and CLL patients  [14], and multi-omics lung adenocarcinoma (LUAD) and breast cancer (BRCA) data from The Cancer Genome Atlas (TCGA) database111https://www.cancer.gov/tcga, we validate the patterns found by our framework using external clinical knowledge and expected baseline patterns.
The clusters can be used to externally validate our findings and show how different harmonic cycles capture interactions between different subsets of data as the cycles with similar weights cluster together samples with similar descriptors. This approach can be extended to other multi-omics data, leading to a nuanced, data-driven discovery of novel subgroups of patients along with their associated biomarkers.
Omics studies have gained substantial importance in unraveling interactions between biomarkers that underlie complex diseases, given the increasing availability of such data across modalities due to recent technological advances [8]. These biomarkers are instrumental in clinical decision-making and drug discovery. However, extracting them from complex data can often be a challenge, given the high dimensionality of these datasets as well as the heterogeneous molecular profiles of patients, disease subtypes, and other biases that plague epidemiological studies [12]. Furthermore, identifying biomarkers that share similar underlying biological pathways or molecular profiles among patients can be an even bigger impediment, one that still requires investigation and novel analytical tools. One of the most robust approaches that analyzes data from a new perspective is Topological Data Analysis (TDA). TDA has been shown to be a useful tool for the analysis of omics data [3, 5, 10, 22, 20, 16, 15, 19, 13, 21], and it enables the identification of complex, multiway, high-order relationships in the data.
Here we utilize the fact that harmonic cycles maximize the contribution of essential simplices; this paper is the first application of harmonic persistent homology to biological problems. We introduce a framework that uses harmonic persistent homology to extract information from multi-omics data, enabling the discovery of hidden structures in data that have the potential to inform clinical questions. Harmonic persistent homology overcomes a challenge in TDA by enabling us to systematically map the topological features uniquely to the input data; additionally, the associated weights capture the importance of the biological markers. It does so even when the data size is small, as we see in one of the applications. We applied harmonic persistent homology in a variety of scenarios in cancer with distinct questions, such as subtype prediction, unsupervised subtype detection, and biomarker discovery in multi-omics data. In lung cancer, we showed that harmonic-weight-rescaled features improved disease subtype prediction accuracy compared to the baseline; in breast cancer we used harmonic weights on samples to discover a basal-like cluster; and in the Venetoclax treatment response dataset, we discovered biologically relevant genes with just 11 samples. In conclusion, harmonic persistent homology has potential therapeutic implications for complex diseases, extending the breadth of applications of TDA in biological and healthcare data.
B
In addition to the above points, please give each figure file a name which indicates the number of the figure it contains; for example, figure1.eps, figure2a.eps, etc. If the figure file contains a figure with multiple parts, for example figure 2(a) to 2(e), give it a name such as figure2a_2e.eps, and so forth.
which is the ‘master’ LaTeX file that reads in all of the other ones; name it appropriately. The ‘master’
For a long equation which has to be split over more than one line the first line should start at the left margin, this is achieved by inserting \fl (full left) at the start of the line. The use of the alignment parameter & is not necessary unless some secondary alignment is needed.
Although it is possible to choose a font other than Computer Modern by loading external packages, this is not recommended.
by section is obtained, e.g. (2.1), (2.2), etc. Equation numbering by section is used in appendices automatically when the \appendix command is used, even if sequential numbering has been used in the rest of the article.
C
Table 2. Prediction RMSE for annual corn yield using 10%, 20%, 50%, and 100% randomly selected training labels.
respiration (Rh), net ecosystem exchange (NEE), and crop yield for 10,335 synthetic sample locations in the United States from the years 2000-2020. All synthetic data are used for pre-training. To reduce computational load, we randomly sample 1/7 of synthetic data in the pre-training phase. We use the observational data from 2000-2017 for training and the data from 2018-2020 for testing.
Besides the true observed yield labels, we use the physics-based Ecosys model [zhou2021quantifying] to simulate ecosystem autotrophic respiration (Ra), ecosystem heterotrophic
For semantic recognition, we utilize a separate language model to embed the obtained textual description and then create additional network layers (e.g., long-short term memory (LSTM)) to capture data dependencies. The use of the language model on textual descriptions enables better capturing of the nature and semantics of input features. To further enhance the embedding performance of the language model on environmental descriptions, we pre-train the semantic recognition component using abundant simulated samples generated by physics-based models. This pre-training process also helps the model better learn the general physical relationships encoded in the physics-based models and mitigate the challenge posed by the sparse observations in tuning the model.
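One way to realize this component, as we understand the description (the embedding dimension, LSTM size, and frozen-embedder assumption are ours):

```python
import torch
import torch.nn as nn

class SemanticRecognizer(nn.Module):
    """LSTM head over (frozen) language-model embeddings of the textual
    descriptions of environmental inputs; one prediction per time step."""
    def __init__(self, embed_dim=384, hidden=128, out_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, text_embeddings):       # (batch, time, embed_dim)
        h, _ = self.lstm(text_embeddings)
        return self.head(h)                   # (batch, time, out_dim)

model = SemanticRecognizer()
x = torch.randn(8, 21, 384)                   # e.g. 8 sites, 21 yearly steps
print(model(x).shape)                         # torch.Size([8, 21, 1])
```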
where c_i^y denotes the column name for the observed labels (e.g., observed water temperature), c_j^y denotes the column name for the observed labels from the neighboring location j (e.g., observed upstream water temperature), and ∪ represents the concatenation operation across the sequences.
B
We define enhancer annotation as a binary classification task. Given a sequence of gene-adjacent genomic DNA that contains enhancers, a binary label indicating whether that segment is part of an enhancer needs to be predicted for each segment of 128 bp.
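Concretely, the per-segment labels can be built as follows (a sketch of our own, assuming enhancer annotations come as half-open (start, end) intervals in bp):

```python
import numpy as np

def segment_labels(seq_len, enhancers, bin_size=128):
    """One binary label per bin_size segment; a segment is positive
    if it overlaps any annotated enhancer interval."""
    n_bins = seq_len // bin_size
    labels = np.zeros(n_bins, dtype=np.int64)
    for start, end in enhancers:
        first = start // bin_size
        last = (end - 1) // bin_size
        labels[first:min(last + 1, n_bins)] = 1
    return labels

print(segment_labels(1024, [(100, 300), (900, 1000)]))
# -> [1 1 1 0 0 0 0 1]
```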
Enhancers are short, noncoding segments that contribute to regulating gene expression. They can be located anywhere from a few thousand to a million bp away from their target gene and work by being brought into physical proximity to the gene’s promoter. Their annotation is a highly challenging task that requires detection of long-range interactions.
The genome contains genes, segments that are transcribed to RNA molecules and potentially translated to proteins. Protein-coding genes are structured as introns and exons. For expression, a gene is first transcribed to a pre-mRNA molecule, and introns are removed via splicing. This combines the exons into one contiguous sequence that encodes the protein. Flanking nucleotides in the RNA that do not code for the protein are called untranslated regions (UTRs) and can have regulatory function. In addition, genes are associated with regulatory regions such as promoters, enhancers, silencers and insulators that modulate their expression. Some elements, such as promoters, may lie in close proximity to the start of the gene, the transcription start site (TSS). Others can lie many thousands of bp away from the gene but mediate their effect through physical proximity.
While this task already proves to be highly challenging for current models at the given length scales, we note that biology is even more complex, with enhancers potentially being millions of bp away.
their TSS. Enhancers can be thousands of bp away from the gene. DNA is wrapped around histone proteins and densely packed as a chromosome.
A
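The task framing above can be made concrete with a short sketch: split a gene-adjacent window into 128 bp bins and label each bin by overlap with annotated enhancer intervals. The interval format and toy data are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the task setup (not any specific model's code): a
# gene-adjacent DNA sequence is split into 128 bp bins, and each bin gets a
# binary label -- 1 if it overlaps an annotated enhancer interval.
BIN = 128

def one_hot(seq):
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq):
        if base in mapping:              # leave N and other codes all-zero
            out[i, mapping[base]] = 1.0
    return out

def bin_labels(seq_len, enhancer_intervals):
    """Label each 128 bp bin 1 if it overlaps any (start, end) enhancer."""
    n_bins = seq_len // BIN
    labels = np.zeros(n_bins, dtype=np.int64)
    for start, end in enhancer_intervals:
        first, last = start // BIN, (end - 1) // BIN
        labels[first : min(last + 1, n_bins)] = 1
    return labels

# Toy example: a 1024 bp window with one enhancer at positions 300-520.
rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=1024))
x = one_hot(seq)                   # (1024, 4) model input
y = bin_labels(len(seq), [(300, 520)])
print(y)                           # bins 2-4 are labeled 1
```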
The BPS (Big-Small Patch) [33] method, introduced in a recent publication, utilizes a non-parametric model for the identification of spatially variable genes in 2D or 3D spatial transcriptomics data. The approach takes normalized spatial transcriptomics data as input. It defines big and small patches for each spatial spot based on neighboring spots within a larger or smaller radius, respectively, and calculates local means of gene expression for both patch sizes. It then computes, for each gene, the ratio between the variances of these local means and approximates the distribution of the ratios with a log-normal distribution. A p-value is finally derived for each gene from this approximated null distribution.
We systematically reviewed recently developed frameworks for identifying spatially variable genes, grouped them into categories, and delved into the unique aspects of their models and underlying principles. Here, we provide a brief discussion encompassing various facets, including preprocessing steps, modeling frameworks, inference techniques, scalability, and practical applicability of these frameworks. We explored the performance of select methods as reported in previously published papers. Nevertheless, we refrained from conducting evaluations based solely on the number of SVGs detected or the trade-off between statistical power and FDR. This decision arises from the fact that the methods discussed in this paper often serve different research objectives, each tailored to specific research questions. For example, a method primarily focused on spatial clustering may yield similar outcomes when considering the top 100 genes versus the top 110 genes. In contrast, a method geared toward accurately identifying genuine SVGs and scrutinizing individual SVGs to glean deeper insights into biological mechanisms may prioritize stringent control of false discovery rates, making that a pivotal concern in its evaluation. The evaluation criteria must align with the unique goals and nuances of each method; gauging performance solely by the number of SVGs selected amounts to comparing apples to oranges.
Furthermore, model-free techniques, in many cases, do not analytically control the FDR, making it challenging to establish a specific cutoff for selecting SVGs. Many methods claim to detect more SVGs than others, often ones undetected by alternative methods. However, the mere detection of more SVGs does not indicate the superiority of a framework if it does not effectively control the FDR. If the goal is to pinpoint the top $k$ (say 1000) SVGs for subsequent analysis, without the necessity of precisely quantifying detection uncertainty, these methods can be employed. However, for a more rigorous approach, it is crucial to implement stringent FDR control measures to prevent false discoveries. In our empirical analysis, we observed that numerous methods exhibit elevated false positive rates with inflated p-values (data not shown). There is an urgent demand for the development of more rigorous statistical approaches to enhance false positive control.
Various methods have been developed for multiplicity correction (MC) to address this concern. Some analytically constrain the false discovery rate (FDR) to remain below a predetermined threshold, while others do not analytically control the FDR and simply select a user-specified number of top genes as SVGs. Researchers may choose a method that aligns with their research goals and the type of downstream analysis they intend to perform. In Table 3, we present an overview of these methods, organized around these critical questions. The permutation-based method is usually considered the gold standard, as it is purely data-driven and distribution-free; however, it is the least scalable since it is the most computationally demanding. FDR-based methods are commonly applied since they offer type I error control while maintaining higher power than the Bonferroni method. Nevertheless, depending on the downstream analysis goal, it is not always necessary to strictly enforce the MC rule. For example, when the goal is to find a low-dimensional embedding of genes, such as in spatial PCA analysis [34], researchers usually choose top-ranked genes for further analysis. In such cases, strictly enforcing MC is not needed.
We have previously discussed both model-based and model-free methods for detecting SVGs. The mathematical models employed to capture the data generation process and the innovative model-free SVG detection techniques have proven valuable for uncovering significant SVGs that offer critical biological insights. However, from a statistical perspective, concerns arise regarding the potential for false discoveries of genes that lack genuine spatial variability. This concern becomes more pronounced when a large number of genes are tested simultaneously, as in most frameworks. If the false discovery rate or type I error is not adequately controlled, it may lead to incorrect conclusions and the selection of numerous genes that exhibit spurious spatial variability.
D
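A minimal sketch of a BPS-style variance-ratio test as described above, followed by Benjamini-Hochberg FDR selection. The radii, the log-normal null fit, and the planted example gene are illustrative assumptions rather than the published implementation.

```python
import numpy as np
from scipy import stats
from scipy.spatial import cKDTree

def bps_pvalues(coords, expr, r_small=1.0, r_big=3.0):
    """coords: (spots, 2); expr: (spots, genes), already normalized."""
    tree = cKDTree(coords)
    def local_means(radius):
        neigh = tree.query_ball_point(coords, r=radius)
        return np.stack([expr[idx].mean(axis=0) for idx in neigh])
    var_small = local_means(r_small).var(axis=0)   # per-gene variance
    var_big = local_means(r_big).var(axis=0)
    ratio = var_big / (var_small + 1e-12)
    # Approximate the null distribution of ratios as log-normal and take the
    # upper tail: spatially variable genes keep structure in big patches.
    log_r = np.log(ratio + 1e-12)
    mu, sd = np.median(log_r), log_r.std()
    return 1.0 - stats.norm.cdf(log_r, loc=mu, scale=sd)

def benjamini_hochberg(p, alpha=0.05):
    order = np.argsort(p)
    thresh = alpha * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    selected = np.zeros(len(p), dtype=bool)
    selected[order[:k]] = True
    return selected

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(400, 2))
expr = rng.normal(size=(400, 100))
expr[:, 0] += np.sin(0.5 * coords[:, 0])   # plant one spatial gene
p = bps_pvalues(coords, expr)
print("SVGs at FDR 0.05:", np.nonzero(benjamini_hochberg(p))[0])
```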
Early methods of molecular representation learning primarily included topological-based and physicochemical-based approaches. Topological-based methods describe molecules by analyzing the chemical bond connections between atoms in molecular structures, with Extended Connectivity Fingerprints (ECFP) [5] being one of the most classic methods. ECFP converts the neighborhood information around atoms into fixed-length binary strings as feature representations. Physicochemical-based methods, on the other hand, describe molecular structures by calculating their physicochemical properties, such as charge states, polarity, electron affinity, and ionization energy. These methods require prior calculation of physicochemical properties, which are then used as feature representations; examples include Molecular Quantum Mechanics (MQM), Molecular Mechanics (MM), and Density Functional Theory (DFT) [6]. However, early methods lacked a deep understanding of molecular structures and the ability to capture complex molecular features, resulting in certain limitations in molecular property prediction tasks.
The input of the Graph modality provides the model with detailed information about the molecular structure, such as the types of chemical bonds and atom types. This explicit information transfer enables the model to intuitively understand the microscopic aspects of the molecule. However, in graph neural networks each layer updates the feature representation of nodes by aggregating information from their neighbors. This aggregation often leads to high similarity between nodes, resulting in over-smoothing, which limits the network's expressive power when dealing with complex structures. In this sense, the Graph modality can be considered a local modality, emphasizing the expression of microscopic details. In contrast, the input of the Image modality does not provide direct guidance on details like atomic chemical bonds. Although the model is unaware of the fine-grained details of the molecular structure, the global nature of convolutional operations allows each pixel to perceive the overall information of the image. This global perspective helps the model better comprehend the overall context without concerning itself with microscopic details, thereby avoiding the issue of over-smoothing. Therefore, the Image modality can be viewed as a global modality, focusing on capturing the overall structure.
In recent years, the advent of Graph Neural Networks (GNNs) has brought about remarkable advancements in an array of graph-related tasks [7], subsequently inspiring their application to the learning of molecular structures. Central to a molecular structure-based GNN model is the view of the topological structure of atoms and bonds within a molecule as a graph, where atoms and chemical bonds correspond to nodes and edges, respectively. Initial features are formulated from inherent physicochemical properties such as atom type and bond type, and aggregation is performed through the iterative exchange of information among neighboring nodes [8]. In contrast to traditional descriptor-based methods, GNNs can encapsulate a more extensive set of molecular features, including local interactions and cyclic structures, thereby enhancing the precision of predictions. To date, numerous GNN-based molecular representation learning methods have been proposed, such as Message Passing Neural Networks (MPNN) [9] and Attentive FP [4], among others.
Early methods of molecular representation learning primarily included topological-based and physicochemical-based approaches. Topological-based methods describe molecules by analyzing the chemical bond connections between atoms in molecular structures, with Extended Connectivity Fingerprints (ECFP) [5] being one of the most classic methods. ECFP converts the neighborhood information around atoms into fixed-length binary strings as feature representations. Physicochemical-based methods, on the other hand, describe molecular structures by calculating their physicochemical properties, such as charge states, polarity, electron affinity, and ionization energy. These methods require prior calculation of physicochemical properties, which are then used as feature representations; examples include Molecular Quantum Mechanics (MQM), Molecular Mechanics (MM), and Density Functional Theory (DFT) [6]. However, early methods lacked a deep understanding of molecular structures and the ability to capture complex molecular features, resulting in certain limitations in molecular property prediction tasks.
Simultaneously, graph contrastive learning [10] has been applied to the field of molecular representation learning with the development of GNNs, compensating for the scarcity of labeled molecular data and significantly promoting the development of this field. Existing molecular representation learning methods based on graph contrastive learning usually adopt a graph enhancement strategy for molecules. However, data augmentation strategies in the molecular field are not straightforward due to the specific chemical rules and constraints of molecular structures, which may necessitate additional domain knowledge and experience to design suitable molecular augmentation strategies. In addition to viewing molecules as topological views of nodes and edges, molecules can also be presented in the form of images [11, 12, 13].
B
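As a concrete instance of the topological baseline described above, RDKit's Morgan fingerprint (the standard ECFP implementation; radius 2 corresponds to ECFP4) hashes circular atom neighborhoods into a fixed-length binary string. The SMILES strings below are arbitrary examples.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Ethanol, phenol, paracetamol -- arbitrary example molecules.
for smiles in ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]:
    mol = Chem.MolFromSmiles(smiles)
    # Hash circular atom neighborhoods (radius 2, i.e. ECFP4) into 2048 bits.
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    bits = np.array(list(fp), dtype=np.int8)
    print(smiles, "->", bits.sum(), "bits set out of", bits.size)
```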
While it might be expected that reduced ruggedness implies increased accessibility, we will see that this is not
Since theoretical work on models of structured fitness landscapes is largely restricted to the case of binary
understanding of evolutionary accessibility in probabilistic models of fitness landscapes. We distinguish between random
It is instructive to begin the discussion of structured landscapes with the seemingly trivial case of an
the work on structured landscapes is that ruggedness does not generally correlate with accessibility in a simple way.
A
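To illustrate the accessibility question raised above, the sketch below builds a House-of-Cards landscape (i.i.d. random fitness on binary genotypes) and counts the fitness-monotonic shortest paths to the global peak; the sequence length and random seed are arbitrary choices.

```python
import itertools
import numpy as np

# House-of-Cards model: each binary genotype gets an i.i.d. random fitness.
# A shortest path to the peak is "accessible" if fitness rises at every step.
L = 8
rng = np.random.default_rng(1)
fitness = {g: rng.random() for g in itertools.product((0, 1), repeat=L)}
peak = max(fitness, key=fitness.get)
start = tuple(0 if p else 1 for p in peak)   # antipodal genotype

def accessible_paths(g):
    """Count shortest paths from g to the peak with strictly rising fitness."""
    if g == peak:
        return 1
    total = 0
    for i in range(L):
        if g[i] != peak[i]:                  # flip one locus toward the peak
            nxt = g[:i] + (peak[i],) + g[i + 1:]
            if fitness[nxt] > fitness[g]:
                total += accessible_paths(nxt)
    return total

print("accessible shortest paths:", accessible_paths(start))
```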
Does contrastive SSL prove a valid training method for extracting robust and meaningful PCG representations?
Following the above rationale, in our study we chose to implement and evaluate the effectiveness of a total of six different augmentations, each of which is described below:
[kiyasseh_clocs_2021], implementing attention mechanisms [oh_lead-agnostic_2022], or combining wavelet transformations and random
If so, which augmentations or transformations lead to such representations, proving the most effective, and which actually inhibit training?
from a single source or on low quality signals fail to generalize to previously unseen data distributions, which do not adhere to the i.i.d. assumption (i.e
C
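A minimal sketch of how stochastic augmentations produce two contrastive views of a PCG recording. The two transforms shown (random cropping and additive Gaussian noise at a fixed SNR) are illustrative choices, not necessarily among the six augmentations evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(x, crop_len):
    # Pick a random window of length crop_len from the signal.
    start = rng.integers(0, len(x) - crop_len + 1)
    return x[start : start + crop_len]

def add_noise(x, snr_db=15.0):
    # Add Gaussian noise at the requested signal-to-noise ratio.
    power = np.mean(x ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

def two_views(x, crop_len=2000):
    """Return two independently augmented views for a contrastive pair."""
    return add_noise(random_crop(x, crop_len)), add_noise(random_crop(x, crop_len))

pcg = rng.normal(size=4000)          # placeholder heart-sound signal
v1, v2 = two_views(pcg)
print(v1.shape, v2.shape)            # (2000,) (2000,)
```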
In this configuration, both the tumor's origin point $\vec{\mathbf{x}}_0 = (x_0, y_0, z_0)$ and the parameters associated with tumor dynamics $D, \rho, R$ are treated as unknowns that need to be inferred.
ODIL is a framework that addresses the challenges of solving inverse problems. It works by discretizing the PDE of the forward problem and using machine learning tools like automatic differentiation and popular deep learning optimizers (ADAM/L-BFGS) to minimize its residual while maintaining its sparse structure.
The GliODIL framework, which utilizes multi-modal data and leverages PDEs for data-driven solution regularization to capture complex dynamics yet remains tunable with limited data, significantly outperforms models strictly governed by PDEs in forecasting tumor recurrence, as well as surpassing the uniform margin approaches that represent standard clinical practice. This underscores its considerable potential for solving diverse inverse problems in biology and highlights its promising prospects for widespread application. Moving forward, to advance research into tumor dynamics and the customization of treatment approaches, we provide access to a dataset that includes MRI images from 152 glioblastoma patients, 58 of whom have undergone pre-treatment FET-PET scans.
In addressing the inverse problem of glioma modeling, we compare our results with those obtained from the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) method, detailed in [27, 28], which relies on numerous simulations to identify parameters that best fit the data, and with the Learn-Morph-Infer (LMI) technique [22], which employs a deep learning framework for tumor growth model parameter estimation. For clarity, solutions obtained with a forward PDE finite difference or finite element method, using parameters from CMA-ES or LMI, are labeled $\text{PDE}_{\text{CMA-ES}}$ and $\text{PDE}_{\text{LMI}}$, respectively, emphasizing that they directly solve the PDE.
The primary metric for evaluating the model's efficacy is its accuracy in predicting tumor recurrence within the post-surgical radiation volume. The metric does not account for factors such as the extent of surgical resection or the impact of the radiotherapy already administered to the patient. Nevertheless, it offers valuable insights into the model's potential to inform personalized radiotherapy planning by identifying tumor cell distribution beyond visible margins. This is particularly relevant for glioblastoma, where recurrences often occur adjacent to the resection cavity. We introduce a critical metric, Recurrence Coverage [%], detailed in Section 4.5, which quantifies the percentage of follow-up MRI-detected recurrences, segmented and encompassed within a plan's radiation target. To ensure a fair comparison between the clinical practice of applying uniform safety margins (1.5 cm around the tumor core, adjusted for brain boundaries) and our GliODIL model's outputs, we kept the total radiotherapy volume, as represented in the 3D volume of treatment plans, constant across all models for each patient. This consistency in radiation volume is crucial when interpreting the comparative figures. In Figure 4, we illustrate both the clinical margins plan (using distance isolines), referred to here as the Standard Plan, and our GliODIL plans (using tumor cell concentration isolines). Our findings, discussed later, indicate that GliODIL outperforms all studied PDE models, highlighting the advantages of loosening the stringent PDE constraints found in conventional forward PDE simulations. This flexibility is particularly beneficial in radiotherapy planning, which aims to accurately pinpoint likely tumor locations by striking a delicate balance between empirical data and tumor growth equations. In demonstrating the contrast between models adhering strictly to tumor growth PDEs and GliODIL, Figure 4 reveals that although $\text{PDE}_{\text{GliODIL}}$ may surpass other PDE-strict methods in Recurrence Coverage, it faces challenges in complex tumor scenarios where the PDEs inadequately capture reality, occasionally missing certain tumor recurrences. Conversely, GliODIL effectively adjusts for equation discrepancies by integrating additional tumor cells in areas with high PET signal intensity via its data-driven component, thereby significantly improving recurrence coverage.
A
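A minimal 1D sketch of the ODIL idea described above: discretize a Fisher-KPP-type growth equation, treat the whole space-time field together with the unknown parameters (D, rho) as optimization variables, and minimize the PDE residual plus a data-mismatch term with Adam. Grid sizes, loss weights, and the synthetic "observation" are assumptions for illustration; the real GliODIL problem is 3D and multi-modal.

```python
import torch

# Discretize u_t = D u_xx + rho u (1 - u) on an nt-by-nx space-time grid and
# optimize the field u and parameters (D, rho) jointly, ODIL-style.
nt, nx, dt, dx = 40, 64, 0.05, 0.1
u = torch.nn.Parameter(torch.full((nt, nx), 0.1))
log_D = torch.nn.Parameter(torch.tensor(-2.0))     # positivity via exp
log_rho = torch.nn.Parameter(torch.tensor(-1.0))

# Synthetic observation: the final-time profile only (as from one scan).
x = torch.linspace(0, (nx - 1) * dx, nx)
u_obs = torch.exp(-((x - 3.0) ** 2))

opt = torch.optim.Adam([u, log_D, log_rho], lr=1e-2)
for step in range(2000):
    D, rho = log_D.exp(), log_rho.exp()
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
    reaction = rho * u[:-1, 1:-1] * (1 - u[:-1, 1:-1])
    residual = ((u_t - D * u_xx - reaction) ** 2).mean()   # PDE loss
    mismatch = ((u[-1] - u_obs) ** 2).mean()               # data loss
    loss = residual + 10.0 * mismatch
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"inferred D={log_D.exp().item():.3f}, rho={log_rho.exp().item():.3f}")
```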