# Recognition Models To Learn Dynamics From Partial Observations With Neural ODEs

Mona Buisson-Fenet *mona.buisson@minesparis.psl.eu*
Ansys Research Team, Ansys France
Centre Automatique et Systèmes, Mines Paris - PSL University
Institute for Data Science in Mechanical Engineering - RWTH Aachen University

Valery Morgenthaler *valery.morgenthaler@ansys.com*
Ansys Research Team, Ansys France

Sebastian Trimpe *trimpe@dsme.rwth-aachen.de*
Institute for Data Science in Mechanical Engineering - RWTH Aachen University

Florent Di Meglio *florent.di_meglio@minesparis.psl.eu*
Centre Automatique et Systèmes, Mines Paris - PSL University

Reviewed on OpenReview: *https://openreview.net/forum?id=LTAdaRM29K*
## Abstract

Identifying dynamical systems from experimental data is a notably difficult task. Prior knowledge generally helps, but the extent of this knowledge varies with the application, and customized models are often needed. Neural ordinary differential equations offer a flexible framework for system identification and can incorporate a broad spectrum of physical insight, giving physical interpretability to the resulting latent space. In the case of partial observations, however, the data points cannot directly be mapped to the latent state of the ODE. Hence, we propose to design recognition models, in particular inspired by nonlinear observer theory, to link the partial observations to the latent state. We demonstrate the performance of the proposed approach on numerical simulations and on an experimental dataset from a robotic exoskeleton.
## 1 Introduction

Predicting the behavior of complex systems is of great importance in many fields. In engineering, for instance, designing controllers for robotic systems requires accurate predictions of their evolution. The dynamic behavior of such systems often follows a certain structure. Mathematically, this structure is captured by differential equations, e.g., the laws of physics. However, even an accurate model cannot account for all aspects of a physical phenomenon, and physical parameters can only be measured with uncertainty. Data-driven methods aim to enhance our predictive capabilities for complex systems based on experimental data.
We focus on dynamical systems and design an end-to-end method for learning them from experimental data. We investigate State-Space Models (SSMs), which are common in system theory, as many modern control synthesis methods build on them and the states are often amenable to physical interpretation. For many systems of interest, some degree of prior knowledge is available. It is desirable to include this knowledge in the SSM. To this end, we consider neural ordinary differential equations (NODEs), which were introduced by Chen et al. (2018) and have since sparked significant interest, e.g., Zhong et al. (2020); Rubanova et al. (2019). Their aim is to approximate a vector field that generates the observed data following a continuous-time ODE with a neural network. Their formulation is general enough to avoid needing a new design for each new system, but can also enforce a wide range of physical insight, allowing for a meaningful and interpretable model. Specific approaches have been proposed to include different priors, which we briefly recall; we present a unified view and include them in the proposed end-to-end framework.

![1_image_0.png](1_image_0.png)

Figure 1: SSMs can include a broad spectrum of physical knowledge. On the left, purely data-based formulations such as latent NODEs are general but tend to violate physical principles and have trouble generalizing. On the right, parametric models can be identified from data: they extrapolate well but are system-specific and require expert knowledge. One can bridge this gap by including the available physical knowledge in an NODE formulation (2), in particular "regularizing" priors (extra terms in the cost function) or "structural" priors (constraints or form of the optimization problem).
Learning an SSM satisfying these priors amounts to learning the dynamics in a specific set of coordinates. However, experimental data is typically only partial, as not all of these coordinates, or *states*, are measured. This is a common problem in machine learning, where the existence of an underlying state that can explain the data is often assumed. In the systems and control community, estimating this underlying state from partial observations is known as state estimation or observer design. An observer is an algorithm that estimates the full latent state given a history of partial measurements and control inputs (Bernard, 2019; Bernard et al., 2022). While observer design provides the theoretical framework for latent state estimation, such as convergence guarantees of the estimate to the true state, it has not received much attention in the machine learning community. Hence, we propose to leverage concepts from nonlinear observer design to learn NODEs with physical priors from partial observations.
We design so-called *recognition models* to map the partial observations to the latent state. We discuss several approaches, in particular based on a type of nonlinear observers called Kazantzis-Kravaris/Luenberger (KKL) observers (Kazantzis & Kravaris, 1998; Andrieu & Praly, 2006). We show that the KKL-based recognition models perform well and have desirable properties, e.g., a given size for the internal state. Such recognition models can then be embedded in the NODE formulation or any other optimization-based system identification algorithm. Our main contributions can be summarized as follows:

- We formulate structured NODEs as a flexible framework for learning dynamics from partial observations, which enables enforcing a broad spectrum of physical knowledge;
- We introduce recognition models to link observations and latent state, then propose several forms based on nonlinear observer design;
- We compare the proposed recognition models in a simulation benchmark;
- We apply the proposed approach to an experimental dataset obtained on a robotic exoskeleton, illustrating the possibility of learning a physically sound model of complex dynamics from real-world data.

Combining these yields an end-to-end framework for learning physical systems from partial observations.
## 2 Related Work

The proposed method is based on two research areas: nonlinear observer design and machine learning for dynamical systems. We give an overview of the main trends and of the methods most closely related to ours.
## 2.1 System Theory

In system theory, many subfields are concerned with the study of dynamical systems from experimental data.

**System identification** The area of system identification aims at finding a possible dynamics model given a finite number of partial measurements (Ljung, 1987; Nelles, 2001; Schoukens & Ljung, 2019). For linear systems, a suitable set of system matrices can be identified using subspace methods (Viberg, 1995). For nonlinear systems, most state-of-the-art techniques aim at estimating the variables of a given parametric model using Bayesian parameter estimation (Galioto & Gorodetsky, 2020) or optimization-based methods (Schittkowski, 2002; Raue et al., 2013; Villaverde et al., 2021), or a decomposition of its dynamics on a suitable basis of functions (Sjöberg et al., 1995). These classical methods tend to be system-specific: they require expert knowledge to construct a parametric model or precisely pick the hypothesis class in which it will be approximated. NODEs are a general tool for system identification in case no parametric model is available, in which a broad range of physical knowledge can be included by adapting the formulation.

**Observer design** When identifying a state-space model from partial observations, the unknown latent state must be estimated. This is the objective of state observers or estimators, which infer the state from observations by designing an auxiliary system driven by the measurement (see Bernard (2019); Bernard et al. (2022) for an overview). Observers often assume an accurate dynamics model, but designs that can deal with imperfect models are also available. In that case, the unknown parts of the dynamics can be overridden through high-gain or sliding-mode designs to enable convergence (Buisson-Fenet et al., 2021; Shtessel et al., 2016). Otherwise, the unknown parameters can be seen as extra states with constant dynamics, and extended state observers can be designed, such that the estimated state and parameters converge asymptotically (Praly et al., 2006). Some concepts from observer theory can be leveraged to improve upon existing approaches for learning dynamics from partial observations, which require estimating the unknown latent state.
## 2.2 Learning Dynamical Systems

Learning dynamical systems from data is also investigated in machine learning (Legaard et al., 2021; Nguyen-Tuong & Peters, 2011). We focus on settings considered realistic in system identification, i.e., methods that allow for control and partial observations, and can ensure certain physical properties.
**Physics-aware models** The dynamics models obtained from machine learning often struggle with generalization and do not verify important physical principles such as conservation laws. Therefore, there have been efforts to bring together machine learning and first principles to learn physics-aware models; see Wang & Yu (2021) for an overview of these efforts in deep learning. In general, there are two takes on including physical knowledge in data-driven models, as illustrated in Fig. 1. On the one hand, "regularizing" priors can be included by adding terms to the cost function to penalize certain aspects. The most common case is when a prior model of the system is available from first principles. We can then learn the residuals of this prior model, i.e., the difference between the prior predictions and the observations, while penalizing the norm of the residual model to correct the prior only as much as necessary. This is investigated in Yin et al. (2021); Mehta et al. (2020) for full state observations. Other quantities can be known a priori and enforced similarly, such as the total energy of the system (Eichelsdörfer et al., 2021) or stability through a Lyapunov-inspired cost function (Schlaginhaufen et al., 2021). On the other hand, structural properties can be enforced by constraints or a specific form of the optimization problem. This yields a harder problem, but improves the performance and interpretability of the model. This line of work originates from Lutter et al. (2019); Greydanus et al. (2019); Cranmer et al. (2020) (see Zhong et al. (2021) for an overview) and has been extended to NODEs for Hamiltonian and port-Hamiltonian systems (Zhong et al., 2020; Massaroli et al., 2020a; Zakwan et al., 2022), but also to enforce more general energy-based structures (Manek & Zico Kolter, 2019; Course et al., 2020) or the rules of electromagnetism (Zhu et al., 2019). However, little previous work on NODEs assumes partial and noisy measurements of system trajectories.

There exist various other methods to learn physics-aware dynamics models, such as Bayesian approaches, in which prior knowledge can be enforced in the form of the kernel (Wu et al., 2019), by structural constraints (Geist & Trimpe, 2021; Rath et al., 2021; Ensinger et al., 2022), or by estimating the variables of a parametric model (Galioto & Gorodetsky, 2020). In this paper, we focus on NODEs, which leverage the predictive power of deep learning while enforcing a broad range of physical knowledge in the problem formulation.

**Partial observations** Most NODE frameworks for dynamical systems assume full state measurements. Partial observations greatly increase the complexity of the problem: the latent state is unknown, leading to a large number of possible state-space representations. In this case, the question of linking the observations with the latent state needs to be tackled. In Bayesian approaches, the distribution over the initial state can be directly conditioned on the first observations, then approximated by a so-called recognition model (Eleftheriadis et al., 2017; Doerr et al., 2017; 2018). Such an approach has also been used for Bayesian extensions of NODEs, where the NODE describes the dynamics of the latent state while the distribution of the initial latent variable given the observations and vice versa are approximated by encoder and decoder networks (Yildiz et al., 2019; Norcliffe et al., 2021a). The encoder network, which links observations to latent state by a deterministic mapping or by approximating the conditional distribution, can also be a Recurrent Neural Network (RNN) (Rubanova et al., 2019; Kim et al., 2021; de Brouwer et al., 2019) or an autoencoder (Bakarji et al., 2022). The particular case in which the latent ODE is linear and evolves according to the Koopman operator (which can be jointly approximated) is investigated in Lusch et al. (2018); Bevanda et al. (2021). In general, little insight into the desired latent representation is provided. This makes it difficult for the obtained models to generalize, and also hampers their interpretability: often in a control environment, the states should have a physical meaning. Therefore, we propose to learn a recognition model that maps the observations to the latent state, while enforcing physical knowledge in the latent space.
## 3 Problem Statement

Consider a general continuous-time nonlinear system

$$\dot{x}(t)=f(x(t),u(t)) \qquad y(t)=h(x(t),u(t))+\epsilon(t) \qquad x(0)=x_{0} \tag{1}$$

where $x(t) \in \mathbb{R}^{d_x}$ is the state, $u(t) \in \mathbb{R}^{d_u}$ is the control input, $y(t) \in \mathbb{R}^{d_y}$ is the measured output, and $f$, $h$ are the true dynamics and measurement functions, assumed continuously differentiable. We denote $\dot{x}(t)$ the derivative of $x$ w.r.t. time $t$, and generally omit the time dependency. We only have access to partial measurements $y$ corrupted by noise $\epsilon$, the control input $u$, and the measurement function $h$: the dynamics $f$ and the state $x$ are unknown. We assume that the solutions to (1) are well-defined and aim to estimate $f$.
The aim of NODEs (Chen et al., 2018) is to learn a vector field that generates the data through an ODE, possibly up to an input and output transformation; see Massaroli et al. (2020b) for an overview. While the interpretation of this formulation for general machine learning tasks remains open, it is very natural for SSMs: it amounts to approximating the dynamics with a neural network. However, there is no unifying framework for applying NODEs to dynamical systems in realistic settings, i.e., with partial and noisy trajectory data, with a control input, and using all available physical knowledge; we present one in this paper.
Assume we have access to $N$ measured trajectories indexed by $j$, denoted $y^j$ and sampled at times $t_i$, $i \in \{0, \ldots, n-1\}$. We approximate the true dynamics $f$ with a neural network $f_\theta$ with weights $\theta$. If the initial conditions $x_0^j$ are known, learning $f_\theta$ can be formulated as the following optimization problem:

$$\begin{aligned}
\min_{\theta}\quad &\frac{1}{2 d_y n N}\sum_{j=1}^{N}\sum_{i=0}^{n-1}\left\|y^{j}(t_{i})-\underline{y}^{j}(t_{i})\right\|_{2}^{2}\\
\text{s.t.}\quad &\dot{x}^{j}=f_{\theta}(x^{j},u^{j}) \qquad y^{j}=h(x^{j},u^{j}) \qquad x^{j}(0)=x_{0}^{j},
\end{aligned}\tag{2}$$

where the constraint is valid for all $j \in \{1, \ldots, N\}$. Several methods have been proposed to compute the gradient of (2); see Schittkowski (2002); Alexe & Sandu (2009); Massaroli et al. (2020b) for details. We opt for automatic differentiation through the numerical solver (torchdiffeq by Chen et al. (2018)).
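As a minimal sketch of this training scheme (network sizes, solver settings, data, and variable names are illustrative assumptions, not the authors' exact implementation; the autonomous case is shown for brevity), problem (2) can be solved by differentiating the output error through the ODE solver:

```python
import torch
from torchdiffeq import odeint  # differentiable ODE solver from Chen et al. (2018)

# Hypothetical dimensions and placeholder data: N trajectories of n samples each.
d_x, d_y, N, n = 2, 1, 50, 100
t = torch.linspace(0., 3., n)                      # sampling times t_i
y_data = torch.randn(N, n, d_y)                    # noisy measurements (placeholder)
x0 = torch.randn(N, d_x)                           # known initial conditions x_0^j

f_theta = torch.nn.Sequential(                     # dynamics approximator f_theta
    torch.nn.Linear(d_x, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x))

def h(x):                                          # known measurement map, here y = x_1
    return x[..., :1]

opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    # Integrate all trajectories in parallel; gradients flow through the solver.
    x_pred = odeint(lambda ti, x: f_theta(x), x0, t).transpose(0, 1)  # (N, n, d_x)
    loss = ((h(x_pred) - y_data) ** 2).mean()      # cost in (2) up to constant factors
    loss.backward()
    opt.step()
```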
This problem is not well-posed: for a given state trajectory, there exist several state-space representations $f$ that can generate the data. This is known as the unidentifiability problem (Aliee et al., 2021). The key problems to obtain meaningful solutions to (2) are (i) enforcing physical knowledge to learn a state-space representation that not only explains the data, but is also physically meaningful, and (ii) dealing with partial observations, i.e., unknown latent state and in particular unknown $x_0$. Problem (i) has been addressed in the literature for some particular cases, as presented in Sec. 2.2, by including the available physical knowledge in the form of "regularizing" or "hard" priors (see Fig. 1). We denote the general approach of adapting (2) to build physics-aware NODEs as *structured NODEs*, and apply it to examples with varying prior knowledge.

Problem (ii) remains largely open. For fixed $f_\theta$ and $u$, each prediction made during training for a given measured trajectory is determined by the corresponding $x_0$; hence, estimating this initial state is critical. We tackle (ii) by designing recognition models in Sec. 4, which is our main technical contribution. We combine them with structured NODEs to simultaneously address (i) and (ii). This yields an end-to-end framework for system identification based on physical knowledge and partial, noisy observations, a common setting in practice; we apply it to several examples in Sec. 5. While some of the components needed for this framework exist in the literature, e.g., on building physics-aware NODEs (Sec. 2.2), the vision of recognition models, the presentation in a unified framework, and the application to relevant practical cases are novel.

**Remark 1** *The considered setting is generally regarded as realistic in system identification, since $y$ is measured by sensors, and $u$ is chosen by the user, who often knows which part of the state is being measured. However, if that is not the case, it is always possible to train an output network $h_\theta$ jointly with $f_\theta$.*
## 4 Recognition Models

In the case of partial observations, the initial condition $x_0$ in (2) is unknown and needs to be estimated jointly with $f_\theta$. Estimating $x_0$ from partial observations is directly related to state estimation: while observers run forward in time to estimate the state asymptotically, we formulate this recognition problem as running backward in time to estimate the initial condition. Therefore, the lens of observer design is well suited for investigating recognition models, though it has rarely been considered. For example, whether the state can be estimated from the output is a precise notion in system theory called observability (Bernard, 2019):

**Definition 1** *Initial conditions $x_a$, $x_b$ are uniformly distinguishable in time $t_c$ if for any input $u : \mathbb{R} \mapsto \mathbb{R}^{d_u}$,*

$$y_{a,u}(t)=y_{b,u}(t)\ \forall\, t\in[0,t_{c}] \;\Rightarrow\; x_{a}=x_{b}, \tag{3}$$

*where $y_{a,u}$ (resp. $y_{b,u}$) is the output of (1) given input $u$ and initial condition $x_a$ (resp. $x_b$). System (1) is observable (in $t_c$) if all initial conditions are uniformly distinguishable.*

Hence, if (1) is observable, then $x(0)$ is uniquely determined by $y$ and $u$ over $[0, t_c]$ for $t_c$ large enough. This assumption is necessary; otherwise, there is no hope of learning $f_\theta$ from the observations only.

In system identification, the unknown initial condition is usually optimized directly as a free variable: it needs to be optimized again for each new trajectory and cannot be used as such for prediction. Instead, we propose to estimate it from the observations by learning a recognition model $\psi_\theta$ that links the output to the initial state. We design $\psi_\theta$ as a neural network and denote its input $\bar{z}(t_c)$, to be described in the following. Concatenating the weights of $f_\theta$ and $\psi_\theta$ into $\theta$ leads to the modified problem:
$$\begin{aligned}
\min_{\theta}\quad &\frac{1}{2 d_y n N}\sum_{j=1}^{N}\sum_{i=0}^{n-1}\left\|y^{j}(t_{i})-\underline{y}^{j}(t_{i})\right\|_{2}^{2}\\
\text{s.t.}\quad &\dot{x}^{j}=f_{\theta}(x^{j},u^{j}) \qquad y^{j}=h(x^{j},u^{j}) \qquad x^{j}(0)=\psi_{\theta}(\bar{z}^{j}(t_{c})).
\end{aligned}\tag{4}$$
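Relative to the sketch after (2), the only change needed for (4) is that the initial condition is produced by the recognition network; a minimal standalone sketch (all sizes are illustrative assumptions):

```python
import torch

d_x, d_z, N = 2, 3, 50
f_theta = torch.nn.Sequential(          # dynamics network, as in the sketch after (2)
    torch.nn.Linear(d_x, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x))
psi_theta = torch.nn.Sequential(        # recognition network psi_theta
    torch.nn.Linear(d_z, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x))

# Joint parameter vector theta = (weights of f_theta, weights of psi_theta).
opt = torch.optim.Adam(list(f_theta.parameters()) + list(psi_theta.parameters()))

z_bar = torch.randn(N, d_z)             # placeholder; Secs. 4.1-4.2 define z_bar(t_c)
x0 = psi_theta(z_bar)                   # x^j(0) = psi_theta(z_bar^j(t_c)) in (4)
```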
## 4.1 General Approaches

Some recognition methods have been proposed in the literature, not necessarily for system identification with NODEs, but rather for probabilistic (Doerr et al., 2017) or generative (Yildiz et al., 2019) models. We draw inspiration from them and rewrite them to fit into our general framework, leading to the following.

![5_image_0.png](5_image_0.png)

Figure 2: Illustration of the method: the KKL observer runs backward over the observations on $[t_c, 0]$, the initial latent state is estimated, then the NODE runs forward and predicts the following trajectory.
**Direct method** The most straightforward approach is to stack the observations and learn a mapping from

$$\bar{z}(t_{c})=\underline{y}_{0:t_{c}}:=(\underline{y}(0),\ldots,\underline{y}(t_{c})) \tag{5}$$

to the initial latent state. For nonautonomous systems, the first inputs should also be taken into account, which yields $\bar{z}(t_c) = (\underline{y}_{0:t_c}, u_{0:t_c})$. We denote this as the direct method. Variants of this approach have been used for approximating the distribution over the initial state conditioned on $\underline{y}_{0:t_c}$, e.g., for joint inference and learning of Gaussian process state-space models (Eleftheriadis et al., 2017; Doerr et al., 2017; 2018). Augmentation strategies for NODEs (Dupont et al., 2019; Massaroli et al., 2020b; Chalvidal et al., 2021; Norcliffe et al., 2021b) are often particular cases with $t_c = 0$. However, for many nonlinear systems, this is too little information to estimate $x(0)$. There are few works on NODE-based system identification from partial observations, some of which train a recognition model from $\underline{y}_{-t_c:0}$ (Ayed et al., 2020; Yildiz et al., 2019; Norcliffe et al., 2021a), or learn the dynamics of $\underline{y}_{0:t_c}$ (Schlaginhaufen et al., 2021).

As justified by the observability assumption, for $t_c$ large enough this is all the information needed to estimate $x(0)$. However, for large $t_c$, the input dimension becomes arbitrarily high, so that optimizing $\psi_\theta$ is more difficult. Moreover, the observations are not preprocessed in any way, though they may be noisy.
**Recurrent recognition models** Latent NODEs (Chen et al., 2018; Rubanova et al., 2019) also use a recognition model to estimate the initial latent state from observations, though this may not be the state of an SSM. This recognition model is a Recurrent Neural Network (RNN) in Chen et al. (2018) and an RNN combined with a second NODE model in Rubanova et al. (2019). These methods filter the information contained in $(\underline{y}_{0:t_c}, u_{0:t_c})$ in backward time, then feed it to the recognition network $\psi_\theta$, which is trained jointly with the (ODE-)RNN. We consider this baseline in the numerical results: we combine a Gated Recurrent Unit (GRU) of internal dimension $d_z$, run in backward time so that $\bar{z}(t_c)$ is the output of the GRU, and an output network $\psi_\theta$, as sketched below. We denote this method from Chen et al. (2018) as RNN+.
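A minimal sketch of such an RNN+ recognition model (layer sizes are illustrative assumptions): the measurement window is flipped in time, encoded by a GRU, and the final hidden state is passed to $\psi_\theta$.

```python
import torch

d_y, d_z, d_x = 1, 3, 2
gru = torch.nn.GRU(input_size=d_y, hidden_size=d_z, batch_first=True)
psi_theta = torch.nn.Sequential(
    torch.nn.Linear(d_z, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x))

def rnn_plus_recognition(y_window):
    """y_window: (N, n_c, d_y) observations on [0, t_c]; returns x(0) estimates."""
    y_backward = torch.flip(y_window, dims=[1])   # run over y_{t_c:0}, i.e., backward time
    _, h_last = gru(y_backward)                   # h_last: (1, N, d_z) = z_bar(t_c)
    return psi_theta(h_last.squeeze(0))           # x(0) = psi_theta(z_bar(t_c))
```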
We now propose a novel type of recognition model based on nonlinear observer design, leading to different choices of $\bar{z}(t_c)$. See Table 1 for a summary of the proposed recognition methods.
## 4.2 KKL-Based Recognition Models

Nonlinear observer design is concerned with the estimation of the state of nonlinear SSMs from partial observations; see e.g., Bernard et al. (2022); Bernard (2019) for an overview. A particular method that has recently gained interest is the Kazantzis-Kravaris/Luenberger (KKL) observer (Kazantzis & Kravaris, 1998; Andrieu & Praly, 2006). Intuitively, KKL observers rely on building a linear filter of the measurement: an auxiliary dynamical system of internal state $z$ with known dimension $d_z$ is simulated, taking the measurement as input and filtering it to extract the information it contains. The observer state verifies
$$\dot{z}=Dz+F\underline{y} \qquad z(0)=z_{0} \tag{6}$$

where $z \in \mathbb{R}^{d_z}$ with $d_z = d_y(d_x+1)$ and $z_0$ is an arbitrary initial condition. In this system, $\underline{y}$ is the continuous-time measurement from (1), or alternatively an interpolation between the discrete observations $\underline{y}(t_i)$. The parameters $D$ and $F$ are chosen such that $D$ is Hurwitz, i.e., all its eigenvalues have strictly negative real parts, and $(D, F)$ is a controllable pair, i.e., the matrix $\begin{pmatrix}F & DF & \ldots & D^{d_z-1}F\end{pmatrix}$ has full rank (Kailath, 1980).
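For concreteness, here is a sketch of one simple way to pick such a pair and verify both conditions numerically (a diagonal Hurwitz $D$ with distinct eigenvalues and an all-ones $F$ are a common default, not the authors' prescribed choice):

```python
import numpy as np

d_x, d_y = 2, 1
d_z = d_y * (d_x + 1)                    # observer dimension from Sec. 4.2

D = np.diag([-1.0, -2.0, -3.0])          # distinct negative eigenvalues => Hurwitz
F = np.ones((d_z, d_y))                  # nonzero couplings to all (distinct) modes of D

assert np.all(np.linalg.eigvals(D).real < 0)                     # D is Hurwitz
ctrb = np.hstack([np.linalg.matrix_power(D, k) @ F for k in range(d_z)])
assert np.linalg.matrix_rank(ctrb) == d_z                        # (D, F) controllable
```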
Thanks to the stability of (6), the internal state $z$ "forgets" its arbitrary initial condition $z_0$ and converges asymptotically to a value that is uniquely determined by the history of $\underline{y}$. Under certain conditions, this value uniquely determines, in turn, the value of the unmeasured state $x$.
More precisely, if there exists an injective transformation from $x$ to $z$, denoted $\mathcal{T}$, and its left inverse $\mathcal{T}^*$, then for any arbitrary $z_0$, the estimate $\hat{x}(t) = \mathcal{T}^*(z(t))$ converges to $x(t)$. This yields that $x(t) \simeq \mathcal{T}^*(z(t))$ for $t$ large enough. The existence of $\mathcal{T}$ is studied separately for autonomous and nonautonomous systems, under mild assumptions, mainly $x(t) \in \mathcal{X}$ compact $\forall\, t$ and (1) is backward distinguishable, i.e., Definition 1 in backward time¹: the current state $x(t)$ is uniquely determined by $y$ and $u$ over $[t - t_c, t]$ for some $t_c$.
**Autonomous systems** For autonomous systems, i.e., $u = 0$, it is shown by Andrieu & Praly (2006) that if the eigenvalues of $D$ have sufficiently large negative real parts, then there exists an injective transformation $\mathcal{T}$ and its left inverse $\mathcal{T}^*$ such that²

$$\|x(t)-\mathcal{T}^{*}(z(t))\| \underset{t\to\infty}{\longrightarrow} 0, \tag{7}$$

meaning that $\mathcal{T}^*(z(t))$ with $z(t)$ from (6) is an observer for $x(t)$. However, $\mathcal{T}^*$ cannot be computed analytically in general. Therefore, it has been proposed to learn it from full-state simulations (da Costa Ramos et al., 2020; Buisson-Fenet et al., 2022) or to directly learn an output predictor $h \circ \mathcal{T}^*$ from partial observations (Janny et al., 2021). Running the observer (6) backward in time³ on $\underline{y}_{t_c:0}$ yields $x(0) \approx \mathcal{T}^*(z(0))$ for $t_c$ large enough. Hence, we propose to train a recognition model $x(0) = \psi_\theta(z(0))$, where $z(0)$ is the result of (6) run backward in time for $t_c$ from an arbitrary initial condition $z(t_c)$. This is further denoted as the KKL method, illustrated in Fig. 2 (see Table 1 for a summary of the proposed methods).
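A minimal sketch of this backward pass (explicit Euler steps, a zero initialization for $z(t_c)$, and the time-reversal convention of Fig. 2 are all implementation assumptions; any ODE solver and any interpolation of the samples would do):

```python
import torch

def kkl_recognition_input(y_window, t, D, F):
    """y_window: (N, n_c, d_y) samples of y on [0, t_c]; t: (n_c,) times.
    Sweeps the linear filter z' = D z + F y from t_c down to 0, so that the
    arbitrary initialization z(t_c) is forgotten; returns z(0) for psi_theta."""
    N, n_c, _ = y_window.shape
    z = torch.zeros(N, D.shape[0])                         # arbitrary z(t_c)
    for i in range(n_c - 1, 0, -1):                        # backward sweep over samples
        dt = t[i] - t[i - 1]
        z = z + dt * (z @ D.T + y_window[:, i, :] @ F.T)   # explicit Euler step
    return z

# x0_hat = psi_theta(kkl_recognition_input(y_window, t, D, F))
```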
**Nonautonomous systems** When extending the previous results to nonautonomous systems, $\mathcal{T}$ not only depends on $z(t)$ but also on time, in particular on the past values of $u$, and becomes injective for $t \geq t_c$ with $t_c$ from the backward distinguishability assumption (Bernard & Andrieu, 2019). In the context of recognition models, this dependency on $u$ over $[0, t]$ can be made explicit by running the observer (6) in backward time, then training a recognition model $x(0) = \psi_\theta(\bar{z}(t_c))$ with $\bar{z}(t_c) = (z(0), u_{t_c:0})$. This is still denoted as the KKL method, for nonautonomous systems.
If the signal $u$ can be represented as the output of an auxiliary system of inner state $\omega$ with dimension $d_\omega$, it is shown in Spirito et al. (2022) that the static observer

$$\dot{z}=Dz+F\begin{pmatrix}\underline{y}\\ u\end{pmatrix} \qquad z(0)=z_{0} \tag{8}$$

leads to the same results as for autonomous systems. The time dependency in $\mathcal{T}$ disappears at the cost of a higher dimension: $d_z = (d_y + d_u)(d_x + d_\omega + 1)$. This functional approach leads to an alternative recognition model denoted KKLu: $x(0) = \psi_\theta(z(0))$, where $z$ is the solution of (8) simulated backward in time, and $d_\omega$ is chosen large enough to generate $u$ (e.g., $d_\omega = 3$ for a sinusoidal $u$).
**Optimizing D jointly** With any KKL-based recognition model, the choice of $D$ in (6), resp. (8), is critical since it controls the convergence rate of $z$. Hence, we propose to optimize $D$ jointly with $\theta$, as in Janny et al. (2021). More details are provided in the supplementary material.
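One simple way to keep $D$ Hurwitz while optimizing it (a sketch of a plausible parametrization, not necessarily the one used here or in Janny et al. (2021)) is to learn unconstrained parameters mapped to strictly negative eigenvalues:

```python
import torch

class LearnableDiagonalD(torch.nn.Module):
    """Diagonal D = -diag(softplus(raw)) stays Hurwitz for any parameter value."""
    def __init__(self, d_z):
        super().__init__()
        self.raw = torch.nn.Parameter(torch.zeros(d_z))          # unconstrained

    def forward(self):
        eigs = -torch.nn.functional.softplus(self.raw) - 1e-3    # strictly negative
        return torch.diag(eigs)

# D = LearnableDiagonalD(d_z)() is then used in the backward filter and optimized
# jointly with f_theta and psi_theta by the same optimizer.
```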
For $t_c$ large enough, the transformation $\mathcal{T}^*$ approximated by $\psi_\theta$ is guaranteed to exist for a known dimension $d_z$. The KKL observer also filters the information and provides a low-dimensional input to $\psi_\theta$, which is expected to be easier to train. The RNN-based recognition models are close to learning a discrete-time observer with unknown dynamics: this is similar, but provides no theoretical argument for choosing the internal dimension of the RNN, no guarantee for the existence of a recognition model in the form of an RNN, no physical interpretation for the behavior of the obtained observer, and leads to many more free parameters.
¹ If the solutions of (1) are unique, e.g., if $f$ is $C^1$, then distinguishability and backward distinguishability are equivalent.

² See the Appendix for technical details.

³ We simulate $z$ backward in time, so that all samples can be used for data fitting after being inputted to $\psi_\theta$.
| Method | $\bar{z}(t_c)$ for autonomous | $\bar{z}(t_c)$ for nonautonomous |
|--------|-------------------------------|----------------------------------|
| $t_c = 0$ | $\underline{y}(0)$ | $(\underline{y}(0), u(0))$ |
| direct | $\underline{y}_{0:t_c}$ | $(\underline{y}_{0:t_c}, u_{0:t_c})$ |
| RNN+ | GRU over $\underline{y}_{t_c:0}$ | GRU over $(\underline{y}_{t_c:0}, u_{t_c:0})$ |
| KKL | KKL over $\underline{y}_{t_c:0}$ | KKL over $\underline{y}_{t_c:0}$, concatenated with $u_{0:t_c}$ |
| KKLu | n/a | functional KKL over $(\underline{y}_{t_c:0}, u_{t_c:0})$ |

Table 1: Summary of the proposed recognition methods for autonomous and nonautonomous systems, all run backward in time. The recognition model $\psi_\theta$ is trained with $x(0) = \psi_\theta(\bar{z}(t_c))$; see Sec. 4 for details.
![7_image_0.png](7_image_0.png)

Figure 3: Learning NODEs without structure, with recognition models direct, RNN+, KKL, and KKLu (for the last, nonautonomous system). We compare the RMSE over the predicted output for 100 test trajectories.
## 5 Experiments

We demonstrate the ability of the proposed approach to learn dynamics from partial observations with varying degrees of prior knowledge, illustrated in Fig. 1. We first compare the different recognition models in combination with NODEs without priors. We then provide an extensive case study on the harmonic oscillator, often used in the literature, learning its dynamics from partial measurements with increasing priors. Finally, we apply our approach to a real-world, complex use case obtained on a robotic exoskeleton⁴. All models are evaluated w.r.t. their prediction capabilities: given $(\underline{y}_{0:t_c}, u_{0:t_c})$ for a number of test trajectories, we estimate the initial state, predict the further output, then compute the RMSE over this predicted output.
## 5.1 Benchmark Of Recognition Models

We demonstrate that the proposed recognition models can estimate the initial condition of a system from partial and noisy observations. We simulate three systems; the underlying physical state serves as ground truth. The first is a simplified model of the dynamics of a two-story building during an earthquake (Winkel, 2017; Karlsson & Svanström, 2019) with $d_x = 4$, $d_y = 1$. The second is the FitzHugh-Nagumo model, a simplified representation of a spiking neuron subject to a constant stimulus (Clairon & Samson, 2020) with $d_x = 2$, $d_y = 1$. The third is the Van der Pol oscillator with $d_x = 2$, $d_y = 1$. More details are provided in the supplementary material. The dynamics models are free of structural priors: this corresponds to the left end of Fig. 1.

We train ten direct, RNN+, KKL, and KKLu recognition models as presented in Sec. 4.2 and in Table 1. The recognition models estimate $x(0)$ from the information contained in $\underline{y}_{0:t_c}$ for the first system, $(\underline{y}_{0:t_c}, u_0)$ for the second system, where $u_0$ is the value of the stimulus, and $(\underline{y}_{0:t_c}, u_{0:t_c})$ for the third system. We use $N = 50$ trajectories of 3 s; the output is corrupted by Gaussian noise, and the hyperparameters are chosen to be coherent between the methods and enable a fair comparison. For evaluation, we randomly select 100 test trajectories (also 3 s) and compute the RMSE over the predicted output. The results presented in Fig. 3 indicate that the more compressed structure from observer design helps build a more effective recognition model. The direct method with $t_c = 0$ was run but is not shown here, since it leads to much higher error, as it does not verify the observability assumption.

⁴ Implementation details are provided in the supplementary material; code to reproduce the experiments is available at https://anonymous.4open.science/r/structured_NODEs-7C23.

![8_image_0.png](8_image_0.png)

Figure 4: NODE and recognition model for the earthquake model, for different lengths of $t_c$ (in time steps).

![8_image_1.png](8_image_1.png)

Figure 5: NODE and recognition model for the Van der Pol oscillator, for different values of $\sigma_\epsilon^2$.
## 5.1.1 Ablation Studies

In the previous benchmark, we choose all hyperparameters such that the comparison is as fair as possible. For example, $\psi_\theta$ is the same neural network, and the dimension of the internal recognition state is the same for the RNN+ and KKL baselines ($d_z$ of the standard KKL for autonomous systems, $d_z$ of the functional KKL for nonautonomous systems). We now investigate the impact of two of the main parameters on the performance of each approach for the full NODE models: $t_c$ and the variance $\sigma_\epsilon^2$ of the Gaussian measurement noise.

For the study on $t_c$, we focus on the earthquake system. We run the same experiments as before with $t_c$ in $\{5, 10, 20, 40, 60, 100\} \times \Delta t$, where $\Delta t = 0.03$ s. As depicted in Fig. 4, when $t_c$ is too low, it becomes difficult to estimate $x(0)$ from the information contained in $\underline{y}_{0:t_c}$: the system is not necessarily observable. It seems the threshold of observability is around $t_c = 30\Delta t = 0.9$ s, since the RMSE over the test trajectories stabilizes for higher values. For this system, the KKL method reaches the lowest error and keeps improving for higher values of $t_c$: the higher $t_c$, the more the observer has converged, the closer the relationship $x(0) \approx \mathcal{T}^*(\bar{z}(0))$ holds, and the easier it seems to learn $\psi_\theta$. For the other methods, $t_c$ seems to have less influence once the threshold of observability is reached, since there is no notion of convergence over time.

For the study on $\sigma_\epsilon^2$, we focus on the Van der Pol oscillator. We test values in $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$ and obtain the results in Fig. 5. As expected, the higher the measurement noise variance, the higher the prediction error on the test trajectories. We again observe a threshold effect, under which further reduction of the noise variance leads to little improvement in the prediction accuracy. Note that for the KKL-based methods, we optimized $D$ once for each noise level from the same initial value, then used this optimized value for all ten experiments. If $D$ is only optimized for a specific noise level, then the performance is degraded at the others, for which this value might filter too much or too little.
![9_image_0.png](9_image_0.png)

Figure 6: Structured NODEs for the harmonic oscillator, with KKL recognition and increasing priors. The true trajectory is in green, the prediction of a long trajectory (30 s) in blue to illustrate the long-term accuracy. Increasing structure yields a more interpretable, but also more accurate model, except for (e), which solves a more open problem (a new frequency is estimated for each trajectory).
| Method | (a) | (b) | (c) | (d) | (e) |
|--------|-----|-----|-----|-----|-----|
| direct | 0.040 (0.011) | 0.050 (0.033) | 0.035 (0.008) | 0.029 (0.005) | 0.080 (0.041) |
| RNN+ | 0.057 (0.014) | 0.055 (0.012) | 0.048 (0.003) | 0.037 (0.006) | 0.052 (0.011) |
| KKL | 0.036 (0.010) | 0.042 (0.011) | 0.036 (0.004) | 0.032 (0.003) | 0.049 (0.003) |

Table 2: Recognition models for the harmonic oscillator. We train ten models for each setting, then compute the median and interquartile range (in parentheses) of the RMSE on the predicted output for 100 test trajectories of 9 s. In most cases, KKL recognition leads to more accurate predictions.
## 5.2 Harmonic Oscillator With Increasing Priors

We now illustrate how NODEs with KKL-based recognition can be combined with physics-aware approaches to cover different degrees of structure, as illustrated in Fig. 1. We simulate an autonomous harmonic oscillator with unknown frequency:

$$\dot{x}_{1}=x_{2} \qquad \dot{x}_{2}=-\omega^{2}x_{1}, \tag{9}$$

where $\omega^2 > 0$ is unknown, with $\omega$ the frequency of the oscillator, and $y = x_1$ is measured, corrupted by Gaussian noise of standard deviation $\sigma = 0.01$. Various designs have been proposed to identify both the state and the model, i.e., the frequency, for example subspace methods, or an extended state-space model with a nonlinear observer; see Praly et al. (2006) and references therein. We demonstrate that our unifying framework can solve this problem while enforcing increasing physical knowledge. The results are illustrated in Table 2 with a KKL recognition model, $\omega = 1$, $N = 20$ trajectories of 3 s for training, and 100 trajectories of 9 s for testing.
First, the NODE is trained without any structure (a) as in (4), which leads to one of many possible state-space models: it fits the observations in $x_1$ but finds another coordinate system for the unmeasured state, as expected for general latent NODEs. It also does not conserve energy. Then, we enforce a Hamiltonian structure (b) by directly learning the Hamiltonian function $H_\theta(x)$. This leads to a dynamics model again in another coordinate system, but one that conserves energy: we learn the dynamics up to a symplectomorphism (Bertalan et al., 2019). We then impose $\dot{x}_1 = x_2$ and only learn $\dot{x}_2 = -\nabla H_\theta(x_1)$ (c), as sketched below. This enforces a particular choice of Hamiltonian dynamics, such that the obtained model conserves energy and stays in the physical coordinate system ($x_1$ position, $x_2$ velocity). Imposing even more structure, we only optimize the unknown frequency $\omega^2$ jointly with the recognition model, while the rest of the dynamics is considered known (d). Another possibility is to consider the extended state-space model where $x_3 = \omega^2$ has constant dynamics (e). In that case, only a recognition model needs to be trained; however, $x(0) \in \mathbb{R}^3$ including the frequency is estimated for each trajectory, such that this formulation is much more open. Both (d) and (e), which correspond to the right end of the spectrum in Fig. 1, also lead to energy-conserving trajectories in the physical coordinates. The results in Fig. 6 illustrate that NODEs with recognition models can incorporate gradual priors for learning SSMs from partial and noisy observations. Note that standard methods tailored to the harmonic oscillator may perform better; however, they are not as general nor as flexible.
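As an illustration of how such structural priors enter the NODE, a minimal sketch of setting (c) (layer sizes are illustrative assumptions, and PyTorch 2's `torch.func` is assumed available): $\dot{x}_1 = x_2$ is hard-coded and only the gradient of the learned Hamiltonian drives $\dot{x}_2$.

```python
import torch
from torch.func import grad

H_theta = torch.nn.Sequential(          # scalar Hamiltonian H_theta(x1)
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def f_structured(t, x):
    """Setting (c): impose x1' = x2 and learn only x2' = -dH/dx1."""
    x1, x2 = x[..., :1], x[..., 1:]
    # Gradient of the summed scalar H wrt x1 gives per-sample dH/dx1 (entries decouple).
    dHdx1 = grad(lambda q: H_theta(q).sum())(x1)
    return torch.cat([x2, -dHdx1], dim=-1)
```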
![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

Figure 7: Structured NODEs and KKL recognition on the robotics dataset. After training the NODE on trajectories of 0.2 s from a subset of the input frequencies, we also test on 52 trajectories of 2 s from other input frequencies, to evaluate generalization capabilities (cut at 1 s on plots for visibility). Computing the prediction RMSE for the different structure settings yields: 110 (a), 0.72 (b), 0.66 (c), 1.2 (d). We show one such test trajectory ($x_1$ top row, $x_4$ bottom row) from an unknown initial condition.
## 5.3 Experimental Dataset From Robotic Exoskeleton

Figure 8: Robotic exoskeleton by Wandercraft.

We demonstrate the performance of the proposed framework on a real-world dataset. We use a set of measurements collected from a robotic exoskeleton at Wandercraft, presented in Vigne (2021) and Fig. 8. This robot features mechanical deformations at weak points of the structure that are neither captured by Computer-Assisted Design modeling nor measured by encoders. These deformations, when measured by a motion capture device, can be shown to account for significant errors in foot placement. Further, they exhibit nonlinear spring-like dynamics that complicate control design. The dataset is obtained by fixing the robot basin to a wall and sending a sinusoidal excitation to the front hip motor at different frequencies. The sagittal hip angle is measured by an encoder, while the angular velocity of the thigh is measured by a gyroscope. In Vigne (2021), first results are obtained using linear system identification: the observed deformation is modeled as a linear spring in the hip, and this model is linearized around an equilibrium point, then its parameters are identified. These estimates are sufficient for tuning a robust controller to compensate for the deformation⁵. We aim to learn a more accurate model of this dynamical system of dimension $d_x = 4$, where $y = (x_1, x_4)$, by identifying the nonlinear deformation terms. We investigate three settings: no structure, imposing $\dot{x}_1 = x_2$ and $\dot{x}_3 = x_4$, and learning the residual of the prior linear model on top of this structure as in Yin et al. (2021), by using $f = f_{\mathrm{lin}} + f_\theta$ as the dynamics, where $f_{\mathrm{lin}}$ is the linear prior (see the sketch below). In each setting, we learn from $N = 265$ trajectories of length 0.2 s in a subset of input frequencies. We use a recognition model with $t_c = 0.1$ s. One short test trajectory in the trained frequency regime is shown in the supplementary material and illustrates the data fit. One longer test trajectory with an input frequency outside of the training regime is shown in Fig. 7 and illustrates the generalization capabilities in all three settings and for the linear prior model.
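A sketch of the residual parametrization (the zero matrices are placeholders, not the identified linear prior of Vigne (2021); sizes are illustrative):

```python
import torch

d_x, d_u = 4, 1
A = torch.zeros(d_x, d_x)                # placeholder linear prior f_lin(x, u) = A x + B u
B = torch.zeros(d_x, d_u)
f_theta = torch.nn.Sequential(
    torch.nn.Linear(d_x + d_u, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x))

def f_residual(x, u):
    """f = f_lin + f_theta: the network only corrects the linear prior."""
    return x @ A.T + u @ B.T + f_theta(torch.cat([x, u], dim=-1))
```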
The obtained results demonstrate that structured NODEs with recognition models can identify real-world nonlinear systems from partial and noisy observations. The learned models can fit data from a complex nonlinear system excited with different input frequencies, and generalize somewhat to unseen frequencies.

⁵ More details are provided in Vigne (2021) and in the supplementary material.
| Recognition model | Short rollouts in trained regime | Long rollouts with unseen frequencies | Long rollouts with EKF, $y = (x_1, x_4)$ | Long rollouts with EKF, $y = x_1$ |
|-------------------|----------------------------------|---------------------------------------|------------------------------------------|-----------------------------------|
| direct | 0.11 | 0.58 | 0.12 | 0.44 |
| RNN+ | 0.15 | 0.56 | 0.15 | 0.37 |
| KKL | 0.17 | 0.60 | 0.12 | 0.37 |
| KKLu | 0.18 | 0.65 | 0.11 | 0.34 |

Table 3: RMSE on test trajectories for the robotics dataset while imposing $\dot{x}_{1/3} = x_{2/4}$. We trained only one model per method; hence, the results only illustrate that all methods are comparable.
The predictions are not perfect, but much more accurate than those of the prior model, as seen in Fig. 17; this is enough for use in closed-loop tasks such as control or monitoring. Imposing $\dot{x}_{1/3} = x_{2/4}$ leads to similar performance as without structure, but yields a physically meaningful state-space representation that can be interpreted in terms of positions and velocities. Due to the inaccurate predictions of the prior model, learning its residuals leads to lower performance.
With all levels of prior knowledge, the different recognition models lead to comparable results, as illustrated in Table 3. The direct and RNN+ methods lead to slightly lower error on the test rollouts, but also take longer to train due to the higher number of free parameters. The KKL and KKLu methods lead to similar performance. To obtain these results, the choice of the gain matrix $D$ was critical. Rigorous analysis of the role of this parameter is beyond the scope of this article and remains a relevant task for future work (Buisson-Fenet et al., 2022).

We also evaluate the learned model inside an Extended Kalman Filter (EKF). The EKF is a classical state estimation tool for nonlinear systems that takes the measurement and control as input and outputs a probabilistic estimate of the current underlying state. At each time step, it linearizes the dynamics model and output map, then proceeds as a linear Kalman Filter to estimate the mean and covariance of the current state (Krener, 2003). We implement an EKF which receives either $y = h(x) = (x_1, x_4)$ or only $y = h(x) = x_1$ as the measurement, and uses the linear prior or an NODE as the dynamics function. In both cases, the NODE estimates are more accurate than those obtained with the linear prior model, as shown in Fig. 9 for a long test trajectory with an input frequency outside of the training regime. As expected, the accuracy for $y = (x_1, x_4)$ is high since we are directly estimating the output. However, the performance difference indicates that the NODE is much more accurate and should enable meaningful state estimation for downstream tasks. When $y = x_1$, the EKF using the prior model is off, while it provides reasonable estimates in most frequency regimes when using the learned model.
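A sketch of one EKF step with the learned dynamics (Euler discretization, a linear output map, and autograd Jacobians are implementation assumptions; the control input is omitted for brevity):

```python
import torch

def ekf_step(x, P, y, h_mat, f_theta, dt, Q, R):
    """One EKF predict/update step using the NODE f_theta as dynamics.
    x: (d_x,) mean, P: (d_x, d_x) covariance, y: measurement, h_mat: output matrix."""
    # Predict: Euler-discretize x' = f_theta(x) and linearize with autograd.
    F_jac = torch.autograd.functional.jacobian(f_theta, x)   # df/dx at current x
    A = torch.eye(x.shape[0]) + dt * F_jac                   # discrete-time transition
    x_pred = x + dt * f_theta(x)
    P_pred = A @ P @ A.T + Q
    # Update: standard linear Kalman correction with output y = h_mat @ x.
    S = h_mat @ P_pred @ h_mat.T + R
    K = P_pred @ h_mat.T @ torch.linalg.inv(S)
    x_new = x_pred + K @ (y - h_mat @ x_pred)
    P_new = (torch.eye(x.shape[0]) - K @ h_mat) @ P_pred
    return x_new, P_new
```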
## 6 Conclusion

The general formulation of NODEs is well suited for nonlinear system identification. However, learning physically sound dynamics in realistic settings, i.e., with control inputs and partial, noisy observations, remains challenging. To achieve this, recognition models are needed to efficiently link the observations to the latent state. We show that notions from observer theory can be leveraged to construct such models; for example, KKL observers can filter the information contained in the observations to produce an input of fixed dimension for which a suitable recognition model is guaranteed to exist. We propose to combine recognition models and existing methods for physics-aware NODEs to build a unifying framework, which can learn physically interpretable models in realistic settings. We illustrate the performance of KKL-based recognition in numerical simulations, then demonstrate that the proposed end-to-end framework can learn SSMs from partial observations with an experimental robotics dataset. While these observer-based recognition models are demonstrated in the context of NODEs, they are a separate contribution, which can also be used in various system identification methods; they could be combined with, e.g., Neural Controlled Differential Equations (Kidger et al., 2020), Bayesian extensions of NODEs (Norcliffe et al., 2021a; Yildiz et al., 2019), or general optimization-based system identification methods (Schittkowski, 2002; Villaverde et al., 2021).

![12_image_0.png](12_image_0.png)

Figure 9: State estimation with an EKF on the robotics dataset. After training the NODE with KKL recognition while imposing $\dot{x}_{1/3} = x_{2/4}$, we run the EKF on long test trajectories from unseen input frequencies. When $y = (x_1, x_4)$, both the prior and the learned model are able to reconstitute the output ($x_1$ top row, $x_4$ bottom row), but the EKF with the NODE performs better. When measuring only $x_1$, the EKF using the prior model is off, while its estimates are reasonable in most frequency regimes with the learned model.

The results herein illustrate that observer theory and, in particular, KKL observers are suitable for building recognition models. To the best of our knowledge, this work is the first to propose this connection; hence, it also points to remaining open questions. In particular, the choice of $(D, F)$ plays a role in the performance of KKL observers (Buisson-Fenet et al., 2022), and methods for tuning them are still needed; setting $D$ to a HiPPO matrix (Gu et al., 2021; 2022) could be an interesting first step. On another note, NODEs can be combined with Convolutional Neural Networks to capture spatial dependencies and learn Partial Differential Equations (PDEs), as investigated in Dulny et al. (2021); Xu et al. (2021). In this paper, we only consider dynamical systems that can be modeled by ODEs, but expect the proposed approach to extend to PDEs.
## References
+ Mihai Alexe and Adrian Sandu. Forward and adjoint sensitivity analysis with continuous explicit RungeKutta schemes. *Applied Mathematics and Computation*, 208(2):328–346, 2009.
326
+
327
+ Hananeh Aliee, Fabian J. Theis, and Niki Kilbertus. Beyond Predictions in Neural ODEs: Identification and Interventions. *arXiv preprint arXiv:2106.12430*, 2021.
328
+
329
+ Vincent Andrieu and Laurent Praly. On the existence of a Kazantzis-Kravaris/Luenberger observer. *SIAM*
330
+ Journal on Control and Optimization, 45(2):422–456, 2006.
331
+
332
+ Ibrahim Ayed, Emmanuel De Bezenac, Arthur Pajot, and Patrick Gallinari. Learning the Spatio-Temporal Dynamics of Physical Processes from Partial Observations. In *Proceedings of the IEEE International* Conference on Acoustics, Speech and Signal Processing, pp. 3232–3236, 2020.
333
+
334
+ Joseph Bakarji, Kathleen Champion, J Nathan Kutz, and Steven L Brunton. Discovering Governing Equations from Partial Measurements with Deep Delay Autoencoders. *arXiv preprint arXiv:2201.05136*, 2022.
335
+
336
+ Pauline Bernard. Observer Design for Nonlinear Systems. In *Lecture Notes in Control and Information* Sciences, volume 479. Springer International Publishing, 2019.
337
+
338
+ Pauline Bernard and Vincent Andrieu. Luenberger Observers for Nonautonomous Nonlinear Systems. *IEEE*
339
+ Transactions on Automatic Control, 64(1):270–281, 2019.
340
+
341
+ Pauline Bernard, Vincent Andrieu, and Daniele Astolfi. Observer Design for Continuous-Time Dynamical Systems. *Annual Reviews in Control*, 2022.
342
+
343
+ Tom Bertalan, Felix Dietrich, Igor Mezić, and Ioannis G. Kevrekidis. On learning Hamiltonian systems from data. *Chaos*, 29(12), 2019.
344
+
345
+ Petar Bevanda, Max Beier, Sebastian Kerz, Armin Lederer, Stefan Sosnowski, and Sandra Hirche. KoopmanizingFlows: Diffeomorphically Learning Stable Koopman Operators. *arXiv preprint arXiv:2112.04085*,
346
+ 2021.
347
+
348
+ Mona Buisson-Fenet, Valery Morgenthaler, Sebastian Trimpe, and Florent Di Meglio. Joint state and dynamics estimation with high-gain observers and Gaussian process models. *IEEE Control Systems Letters*,
349
+ 5(5):1627–1632, 2021.
350
+
351
+ Mona Buisson-Fenet, Lukas Bahr, and Florent Di Meglio. Towards gain tuning for numerical KKL observers.
352
+
353
+ arXiv preprint arXiv:2204.00318, 2022.
354
+
355
+ Mathieu Chalvidal, Matthew Ricci, Rufin VanRullen, and Thomas Serre. Go with the Flow: Adaptive Control for Neural ODEs. In *Proceedings of the International Conference on Learning Representations*, 2021.
356
+
357
+ Ricky T.Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural Ordinary Differential Equations. In *Advances in Neural Information Processing Systems 32*, pp. 6572–6583, 2018.
358
+
359
+ Quentin Clairon and Adeline Samson. Optimal control for estimation in partially observed elliptic and hypoelliptic linear stochastic differential equations. *Statistical Inference for Stochastic Processes*, 23:105–127, 2020.
361
+
362
+ Kevin L Course, Trefor W Evans, and Prasanth B. Nair. Weak Form Generalized Hamiltonian Learning. In Advances in Neural Information Processing Systems, 2020.
363
+
364
+ Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian Neural Networks. In *ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations*, 2020.
365
+
366
+ L da Costa Ramos, F Di Meglio, L F Figuiera da Silva, P Bernard, and V Morgenthaler. Numerical design of Luenberger observers for nonlinear systems. In Proceedings of the 59th Conference on Decision and Control, pp. 5435–5442, 2020.
367
+
368
+ Edward de Brouwer, Jaak Simm, Adam Arany, and Yves Moreau. GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. In *Advances in Neural Information Processing Systems 33*, 2019.
369
+
370
+ Andreas Doerr, Christian Daniel, Duy Nguyen-Tuong, Alonso Marco, Stefan Schaal, Marc Toussaint, and Sebastian Trimpe. Optimizing long-term predictions for model-based policy search. Proceedings of the 1st Conference on Robot Learning, 78:227–238, 2017.
371
+
372
+ Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, and Sebastian Trimpe. Probabilistic recurrent state-space models. *Proceedings of the 35th International Conference on Machine Learning*, pp. 1280–1289, 2018.
373
+
374
+ Timothy Doyeon Kim, Thomas Zhihao Luo, Jonathan W. Pillow, and Carlos D. Brody. Inferring Latent Dynamics Underlying Neural Population Activity via Neural Differential Equations. In Proceedings of the 38th International Conference on Machine Learning, pp. 5551–5561, 2021.
375
+
376
+ Andrzej Dulny, Andreas Hotho, and Anna Krause. NeuralPDE: Modelling Dynamical Systems from Data. *arXiv preprint arXiv:2111.07671*, 2021.
379
+
380
+ Emilien Dupont, Arnaud Doucet, and Yee Whye Teh. Augmented Neural ODEs. In *Advances in Neural* Information Processing Systems, 2019.
381
+
382
+ Jonas Eichelsdörfer, Sebastian Kaltenbach, and Phaedon-Stelios Koutsourelakis. Physics-enhanced Neural Networks in the Small Data Regime. *Workshop on Machine Learning and the Physical Sciences (NeurIPS* 2021), 2021.
383
+
384
+ Stefanos Eleftheriadis, Thomas F.W. Nicholson, Marc P. Deisenroth, and James Hensman. Identification of Gaussian process state space models. In *Advances in Neural Information Processing Systems 31*, pp. 5310–5320, Long Beach, California, 2017.
387
+
388
+ Katharina Ensinger, Friedrich Solowjow, Michael Tiemann, and Sebastian Trimpe. Structure-preserving Gaussian Process Dynamics. *arXiv preprint arXiv:2102.01606*, 2022.
389
+
390
+ Nicholas Galioto and Alex Arkady Gorodetsky. Bayesian system ID: optimal management of parameter, model, and measurement uncertainty. *Nonlinear Dynamics*, 102:241–267, 2020.
391
+
392
+ A. René Geist and Sebastian Trimpe. Structured learning of rigid-body dynamics : A survey and unified view. *GAMM-Mitteilungen*, (44:e202100009), 2021.
393
+
394
+ Sam Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian Neural Networks. In Advances in Neural Information Processing Systems 33, 2019.
395
+
396
+ Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers. In *Advances in Neural Information Processing Systems*, pp. 572–585, 2021.
397
+
398
+ Albert Gu, Karan Goel, and Christopher Ré. Efficiently Modeling Long Sequences with Structured State Spaces. In *Proceedings of the International Conference on Learning Representations*, 2022.
399
+
400
+ Steeven Janny, Vincent Andrieu, Madiha Nadri, and Christian Wolf. Deep KKL: Data-driven Output Prediction for Non-Linear Systems. In *Proceedings of the IEEE Conference on Decision and Control*, 2021.
401
+
402
+ Thomas Kailath. *Linear Systems*. Prentice Hall PTR, Englewood Cliffs, New Jersey, 1980.
+
+ Daniel Karlsson and Olle Svanström. *Modelling Dynamical Systems Using Neural Ordinary Differential Equations*. Master's thesis in Complex Adaptive Systems, Department of Physics, Chalmers University of Technology, 2019.
403
+
404
+ Nikolaos Kazantzis and Costas Kravaris. Nonlinear observer design using Lyapunov's auxiliary theorem. *Systems and Control Letters*, 34:241–247, 1998.
407
+
408
+ Patrick Kidger, James Morrill, James Foster, and Terry Lyons. Neural controlled differential equations for irregular time series. In *Advances in Neural Information Processing Systems*, 2020.
409
+
410
+ Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In *Proceedings of the* 3rd International Conference on Learning Representations, 2015.
411
+
412
+ Arthur J. Krener. The Convergence of the Extended Kalman Filter. In *Directions in Mathematical Systems Theory and Optimization - Lecture Notes in Control and Information Sciences*, volume 286, pp. 173–182. Springer-Verlag Berlin Heidelberg, 2003.
415
+
416
+ Christian Møldrup Legaard, Thomas Schranz, Gerald Schweiger, Ján Drgoňa, Basak Falay, Cláudio Gomes, Alexandros Iosifidis, Mahdi Abkar, and Peter Gorm Larsen. Constructing Neural Network-Based Models for Simulating Dynamical Systems. *ACM Computing Surveys*, 1(1), 2021.
417
+
418
+ Lennart Ljung. *System Identification: Theory for the User*. Prentice Hall PTR, Englewood Cliffs, New Jersey, 1987.
419
+
420
+ Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. *Nature Communications*, 9, 2018.
421
+
422
+ Michael Lutter, Christian Ritter, and Jan Peters. Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning. In *Proceedings of the International Conference on Learning Representations*, 2019.
423
+
424
+ Gaurav Manek and J. Zico Kolter. Learning stable deep dynamics models. In *Advances in Neural Information* Processing Systems, 2019.
425
+
426
+ Stefano Massaroli, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, and Hajime Asama. Stable Neural Flows. *arXiv preprint arXiv:2003.08063*, 2020a.
429
+
430
+ Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, and Hajime Asama. Dissecting neural ODEs. In *Advances in Neural Information Processing Systems*, pp. 3952–3963, 2020b.
431
+
432
+ Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Jeff Schneider, Andrew Oakleigh Nelson, Mark D Boyer, and Egemen Kolemen. Neural Dynamical Systems. In *International Conference on Learning* Representations - Integration of Deep Neural Models and Differential Equations Workshop, 2020.
433
+
434
+ Oliver Nelles. *Nonlinear System Identification*. Springer, Berlin, Heidelberg, 2001.
+
+ Duy Nguyen-Tuong and Jan Peters. Model learning for robot control: A survey. *Cognitive Processing*, 12:319–340, 2011.
436
+
437
+ Alexander Norcliffe, Cristian Bodnar, Ben Day, Jacob Moss, and Pietro Liò. Neural ODE Processes. In Proceedings of the International Conference on Learning Representations, 2021a.
438
+
439
+ Alexander Norcliffe, Cristian Bodnar, Ben Day, Nikola Simidjievski, and Pietro Liò. On second order behaviour in augmented neural ODEs. In The Symbiosis of Deep Learning and Differential Equations Workshop (NeurIPS 2021), 2021b.
440
+
441
+ L Praly, L Marconi, and A Isidori. A new observer for an unknown harmonic oscillator. In *Proceedings of* the 17th International Symposium on Mathematical Theory of Networks and Systems, pp. 996–1001, 2006.
442
+
443
+ Lucas Rath, A. René Geist, and Sebastian Trimpe. Using Physics Knowledge for Learning Rigid-body Forward Dynamics with Gaussian Process Force Priors. In *Proceedings of the 5th Conference on Robot* Learning, pp. 101–111, 2021.
444
+
445
+ Andreas Raue, Marcel Schilling, Julie Bachmann, Andrew Matteson, Max Schelke, Daniel Kaschek, Sabine Hug, Clemens Kreutz, Brian D. Harms, Fabian J. Theis, Ursula Klingmüller, and Jens Timmer. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. *PLoS ONE*, 8(9), 2013.
446
+
447
+ Yulia Rubanova, Ricky T.Q. Chen, and David Duvenaud. Latent ODEs for irregularly-sampled time series. In *Advances in Neural Information Processing Systems*, 2019.
450
+
451
+ Klaus Schittkowski. *Numerical Data Fitting in Dynamical Systems*. Springer, Boston, MA, 2002.
452
+
453
+ Andreas Schlaginhaufen, Philippe Wenk, Andreas Krause, and Florian Dörfler. Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems. In *Advances in Neural Information* Processing Systems 35, 2021.
454
+
455
+ Johan Schoukens and Lennart Ljung. Nonlinear System Identification: A User-Oriented Roadmap. *IEEE Control Systems Magazine*, 39(6):28–99, 2019.
457
+
458
+ Yuri Shtessel, Christopher Edwards, Leonid Fridman, and Arie Levant. Sliding Mode Control and Observation. In *Control Engineering*. Birkhäuser, New York, 2016.
459
+
460
+ Jonas Sjöberg, Qinghua Zhang, Lennart Ljung, Albert Benveniste, Bernard Delyon, Pierre Yves Glorennec, Håkan Hjalmarsson, and Anatoli Juditsky. Nonlinear black-box modeling in system identification: a unified overview. *Automatica*, 31(12):1691–1724, 1995.
461
+
462
+ Mario Spirito, Pauline Bernard, and Lorenzo Marconi. On the existence of robust functional KKL observers. In *Proceedings of the American Control Conference*, 2022.
465
+
466
+ Mats Viberg. Subspace-based methods for the identification of linear time-invariant systems. *Automatica*, 31(12):1835–1851, 1995.
468
+
469
+ Matthieu Vigne. *Estimation and Control of the Deformations of an Exoskeleton using Inertial Sensors*. PhD thesis, Mines ParisTech - Université PSL, 2021.
471
+
472
+ Alejandro F. Villaverde, Dilan Pathirana, Fabian Fröhlich, Jan Hasenauer, and Julio R. Banga. A protocol for dynamic model calibration. *arXiv preprint arXiv:1902.11136*, 2021.
473
+
474
+ Rui Wang and Rose Yu. Physics-Guided Deep Learning for Dynamical Systems: A survey. arXiv preprint arXiv:2107.01272, 2021.
475
+
476
+ Brian Winkel. 2017-Gustafson, G. B. - Differential Equations Course Materials, 2017. URL https://www.simiode.org/resources/3892.
479
+
480
+ Jin Long Wu, Carlos Michelén-Ströfer, and Heng Xiao. Physics-informed covariance kernel for model-form uncertainty quantification with application to turbulent flows. *Computers and Fluids*, 193, 2019.
481
+
482
+ Xingzi Xu, Ali Hasan, Khalil Elkhalil, Jie Ding, and Vahid Tarokh. Characteristic Neural Ordinary Differential Equations. *arXiv preprint arXiv:2111.13207*, 2021.
483
+
484
+ Çagatay Yildiz, Markus Heinonen, and Harri Lähdesmäki. ODE2VAE: Deep generative second order ODEs with Bayesian neural networks. In *Advances in Neural Information Processing Systems*, 2019.
485
+
486
+ Yuan Yin, Vincent Le Guen, Jérémie Dona, Emmanuel de Bézenac, Ibrahim Ayed, Nicolas Thome, and Patrick Gallinari. Augmenting physical models with deep networks for complex dynamics forecasting. In Proceedings of the 9th International Conference on Learning Representations, 2021.
487
+
488
+ Muhammad Zakwan, Loris Di Natale, Bratislav Svetozarevic, Philipp Heer, Colin N. Jones, and Giancarlo Ferrari Trecate. Physically Consistent Neural ODEs for Learning Multi-Physics Systems. *arXiv preprint* arXiv:2211.06130, 2022.
489
+
490
+ Yaofeng Desmond Zhong, Biswadip Dey, and Amit Chakraborty. Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control. In *International Conference on Learning Representations*, 2020.
491
+
492
+ Yaofeng Desmond Zhong, Biswadip Dey, and Amit Chakraborty. Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data. In Proceedings of the 3rd Conference on Learning for Dynamics and Control, volume 144, pp. 1218–1229, 2021.
493
+
494
+ Max Zhu, Jacob Moss, and Pietro Lio. Modular Neural Ordinary Differential Equations. *arXiv preprint arXiv:2109.07359v2*, 2021.
495
+
496
+ ## A More Detailed Background on KKL Observers
497
+
498
+ We recall the main existence results on KKL observers. Correctly stating these results requires more formalism than what is used in the main body of the paper. However, the assumptions and the reasoning are identical. We start with the main existence result on autonomous systems, then recall the extension to nonautonomous systems.
499
+
500
+ ## A.1 Autonomous Systems
501
+
502
+ Consider the following autonomous nonlinear dynamical system
503
+
504
+ $$\dot{x} = f(x), \qquad y = h(x) \tag{10}$$
+
+ where $x \in \mathbb{R}^{d_x}$ is the state, $y \in \mathbb{R}^{d_y}$ is the measured output, $f$ is a $C^1$ function and $h$ is a continuous function.
510
+
511
+ The goal of observer design is to compute an estimate of the state x(t) using the past values of the output y(s), 0 ≤ s ≤ t. We make the following assumptions:
512
+ **Assumption 1** *There exists a compact set $\mathcal{X}$ such that for any solution $x$ of (10), $x(t) \in \mathcal{X}$ for all $t \geq 0$.*
513
+
516
+ **Assumption 2** *There exists an open bounded set $\mathcal{O}$ containing $\mathcal{X}$ such that (10) is backward $\mathcal{O}$-distinguishable on $\mathcal{X}$, i.e., for any trajectories $x_a$ and $x_b$ of (10), there exists $\bar{t} > 0$ such that for any $t \geq \bar{t}$ such that $(x_a(t), x_b(t)) \in \mathcal{X} \times \mathcal{X}$ and $x_a(t) \neq x_b(t)$, there exists $s \in [t - \bar{t}, t]$ such that*
517
+
518
+ $$h(x_{a}(s))\neq h(x_{b}(s))$$
519
+
520
+ *and $(x_a(\tau), x_b(\tau)) \in \mathcal{O} \times \mathcal{O}$ for all $\tau \in [s, t]$. In other words, their respective outputs become different in backward finite time before leaving $\mathcal{O}$.*
521
+
522
+ This is the assumption of backward distinguishability, i.e., Definition 1 but in backward time. It means that the current state is uniquely determined by the past values of the output. On the contrary, (forward) distinguishability means that the initial state is uniquely determined by the future values of the output. If the solutions of (10) are unique, e.g., if $f$ is $C^1$, then these two notions are equivalent.
525
+
526
+ The following Theorem derived in Andrieu & Praly (2006) proves the existence of a KKL observer.
527
+
528
+ **Theorem 1 (Andrieu & Praly, 2006)** *Suppose Assumptions 1 and 2 hold. Define $d_z = d_y(d_x + 1)$. Then, there exists $\ell > 0$ and a set $S$ of zero measure in $\mathbb{C}^{d_z}$ such that for any matrix $D \in \mathbb{R}^{d_z \times d_z}$ with eigenvalues $(\lambda_1, \ldots, \lambda_{d_z})$ in $\mathbb{C}^{d_z} \setminus S$ with $\operatorname{Re}\lambda_i < -\ell$, and any $F \in \mathbb{R}^{d_z \times d_y}$ such that $(D, F)$ is controllable, there exists an injective mapping $\mathcal{T} : \mathbb{R}^{d_x} \to \mathbb{R}^{d_z}$ that satisfies the following equation on $\mathcal{X}$*
+
+ $$\frac{\partial \mathcal{T}}{\partial x}(x)\,f(x) = D\,\mathcal{T}(x) + F\,h(x), \tag{11}$$
+
+ *and a left inverse $\mathcal{T}^* : \mathbb{R}^{d_z} \to \mathbb{R}^{d_x}$ such that the trajectories of (10) remaining in $\mathcal{X}$ and any trajectory of*
+
+ $$\dot{z} = D z + F y \tag{12}$$
+
+ *verify*
+
+ $$|z(t) - \mathcal{T}(x(t))| \leq M\,|z(0) - \mathcal{T}(x(0))|\,e^{-\lambda_{\min} t} \tag{13}$$
+
+ *for some $M > 0$ and with*
+
+ $$\lambda_{\min} = \min\left\{|\operatorname{Re}\lambda_1|, \ldots, |\operatorname{Re}\lambda_{d_z}|\right\}. \tag{14}$$
+
+ *This yields*
+
+ $$\lim_{t \to +\infty} |x(t) - \mathcal{T}^*(z(t))| = 0. \tag{15}$$
566
+
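+ To make the role of (12)–(13) concrete, the following minimal sketch (ours, not taken from the cited works) simulates the filter $\dot{z} = Dz + Fy$ for a toy harmonic oscillator with $y = x_1$ and checks that two copies started from different $z(0)$ contract toward each other, which is what the exponential bound (13) guarantees; the left inverse $\mathcal{T}^*$ would then be obtained by regressing samples of $x$ on samples of $z$:
+
+ ```python
+ import numpy as np
+ from scipy.integrate import solve_ivp
+
+ # Toy system: harmonic oscillator with y = x1 (d_x = 2, d_y = 1).
+ d_x, d_y = 2, 1
+ d_z = d_y * (d_x + 1)                  # d_z = 3, as in Theorem 1
+ D = np.diag([-1.0, -2.0, -3.0])        # Hurwitz, distinct real eigenvalues
+ F = np.ones((d_z, d_y))                # (D, F) controllable for this choice
+
+ def joint(t, s):
+     # Simulate the state x and the observer z of (12) together.
+     x, z = s[:d_x], s[d_x:]
+     y = x[:d_y]                        # h(x) = x1
+     dx = np.array([x[1], -x[0]])
+     return np.concatenate([dx, D @ z + F @ y])
+
+ x0 = np.array([1.0, 0.0])
+ # Two observers started from different z(0), driven by the same output:
+ sa = solve_ivp(joint, (0.0, 20.0), np.r_[x0, np.zeros(d_z)], dense_output=True)
+ sb = solve_ivp(joint, (0.0, 20.0), np.r_[x0, 5.0 * np.ones(d_z)], dense_output=True)
+ t = np.linspace(0.0, 20.0, 200)
+ gap = np.abs(sa.sol(t)[d_x:] - sb.sol(t)[d_x:]).max(axis=0)
+ print(gap[0], gap[-1])                 # the gap decays like exp(-lambda_min * t)
+ ```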
567
+ ## A.2 Nonautonomous Systems
568
+
569
+ These results are extended to nonautonomous systems in Bernard & Andrieu (2019). The system equations are then
570
+
571
+ $$\dot{x} = f(x, u), \qquad y = h(x, u) \tag{16}$$
574
+ where u ∈ U is the input. Assumption 2 naturally extends to nonautonomous systems if it is true for any fixed input u of interest. The following Theorem proves the existence of a KKL observer in the nonautonomous case under the weak assumption of backward distinguishability.
575
+
576
+ **Theorem 2 (Bernard & Andrieu, 2019)** *Take some fixed input $u \in \mathcal{U}$. Suppose Assumptions 1 and 2 hold for this $u$ with a certain $\bar{t}_u \geq 0$. Define $d_z = d_y(d_x + 1)$. Then, there exists a set $S$ of zero measure in $\mathbb{C}^{d_z}$ such that for any matrix $D \in \mathbb{R}^{d_z \times d_z}$ with eigenvalues $(\lambda_1, \ldots, \lambda_{d_z})$ in $\mathbb{C}^{d_z} \setminus S$ with $\operatorname{Re}\lambda_i < 0$, and any $F \in \mathbb{R}^{d_z \times d_y}$ such that $(D, F)$ is controllable, there exists a mapping $\mathcal{T}_u : \mathbb{R} \times \mathbb{R}^{d_x} \to \mathbb{R}^{d_z}$ that satisfies the following equation on $\mathcal{X}$*
+
+ $$\frac{\partial \mathcal{T}_u}{\partial x}(t, x)\,f(x, u(t)) + \frac{\partial \mathcal{T}_u}{\partial t}(t, x) = D\,\mathcal{T}_u(t, x) + F\,h(x, u(t)), \tag{17}$$
+
+ *and a mapping $\mathcal{T}_u^* : \mathbb{R} \times \mathbb{R}^{d_z} \to \mathbb{R}^{d_x}$ such that $\mathcal{T}_u(t, \cdot)$ and $\mathcal{T}_u^*(t, \cdot)$ only depend on the past values of $u$ on $[0, t]$, and $\mathcal{T}_u(t, \cdot)$ is injective for all $t \geq \bar{t}_u$ with left inverse $\mathcal{T}_u^*(t, \cdot)$ on $\mathcal{X}$. Then, the trajectories of (16) remaining in $\mathcal{X}$ and any trajectory of*
+
+ $$\dot{z} = D z + F y, \tag{18}$$
+
+ *verify*
+
+ $$|z(t) - \mathcal{T}_u(t, x(t))| \leq M\,|z(0) - \mathcal{T}_u(0, x(0))|\,e^{-\lambda_{\min} t} \tag{19}$$
+
+ *for some $M > 0$ and with*
+
+ $$\lambda_{\min} = \min\left\{|\operatorname{Re}\lambda_1|, \ldots, |\operatorname{Re}\lambda_{d_z}|\right\}. \tag{20}$$
+
+ *This yields*
+
+ $$\lim_{t \to +\infty} |x(t) - \mathcal{T}_u^*(t, z(t))| = 0. \tag{21}$$
621
+ **Remark 2** *The literature on nonlinear observer design is wide and many types of observers have been proposed; see Bernard (2019); Bernard et al. (2022) for an overview. The proposed recognition models are based on KKL observers, which are well-suited for the recognition problem due to the existence of the transformation $\mathcal{T}$, which can be approximated jointly with the dynamics. Other designs for general nonlinear systems include high-gain observers, which could lead to the same type of structure for the recognition problem; however, they are subject to peaking and tend to amplify measurement noise. An extended Kalman filter could also be used to directly estimate the trajectories using the current dynamics model. However, there is no convergence guarantee for such filters, and they rely directly on a linearization of the current dynamics, hence they are sensitive to model error. We expect this to be harder to train than the KKL-based methods, which decouple the recognition and dynamics models.*
624
+
625
+ ## B Implementation Details
626
+
627
+ We demonstrate the proposed method in several numerical experiments and one experimental dataset obtained on a robotic exoskeleton. We investigate the direct method $\bar{z}(t_c) = (y_{0:t_c}, u_{0:t_c})$, the RNN+ method where $\bar{z}(t_c)$ is the output of an RNN run over $(y_{0:t_c}, u_{0:t_c})$, the KKL method $\bar{z}(t_c) = (z(t_c), u_{0:t_c})$, and the functional KKL method (denoted KKLu) $\bar{z}(t_c) = z(t_c)$ with $d_z = (d_y + d_u)(d_x + d_\omega + 1)$. The KKL observer is run backward in time: we solve the ODE on $z$ backward in time on $[t_c, 0]$ and learn the mapping from $z(0)$ to $x(0)$, then use all samples to train the NODE model. Similarly for the RNN.
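+ As an illustration, such a backward recognition pass could be implemented as follows; a minimal sketch with a helper name of our choosing, assuming regularly sampled measurements and a given pair (*D, F*):
+
+ ```python
+ import numpy as np
+ from scipy.integrate import solve_ivp
+ from scipy.interpolate import interp1d
+
+ def kkl_recognition_features(t_grid, y_samples, D, F):
+     """Run dz/dt = D z + F y backward on [t_c, 0] and return z(0).
+
+     t_grid: increasing times from 0 to t_c; y_samples: (n_c, d_y) measurements.
+     z(0) summarizes y_{0:t_c} and is fed to the network psi_theta estimating x(0).
+     """
+     y = interp1d(t_grid, y_samples, axis=0, fill_value="extrapolate")
+     rhs = lambda t, z: D @ z + F @ y(t)
+     z_tc = np.zeros(D.shape[0])               # arbitrary initialization at t_c
+     sol = solve_ivp(rhs, (t_grid[-1], 0.0), z_tc, rtol=1e-6)  # backward in time
+     return sol.y[:, -1]                       # z at t = 0
+ ```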
633
+
634
+ In all cases, the choice of D is important for the KKL-based recognition models. For each considered system and each considered method, we optimize D jointly with all other parameters once, then reuse the obtained value of D for all corresponding experiments. We set $F = 1_{d_z \times d_y}$ and initialize D with the following method. We compute the poles $p_i$ of a Butterworth filter of order $d_z$ and cut-off frequency $2\pi\omega_c$ and set each block of D as
635
+
636
+ $$D_{i}=\begin{cases}p_{i}&\text{if }p_{i}\text{ is real}\\ \begin{pmatrix}\operatorname{Re}\{p_{i}\}&\operatorname{Im}\{p_{i}\}\\ -\operatorname{Im}\{p_{i}\}&\operatorname{Re}\{p_{i}\}\end{pmatrix}&\text{otherwise,}\end{cases}\tag{22}$$
637
+
638
+ such that D is a block-diagonal matrix whose eigenvalues are the poles of the filter. This choice ensures that the pair (*D, F*) is controllable and that D is Hurwitz with physically meaningful eigenvalues. Other possibilities exist, such as choosing D in companion form or as a negative diagonal matrix; however, we found that this strategy leads to the best performance for the considered use cases. We pick ωc = 1 for the systems of the recognition model benchmark and the harmonic oscillator with unknown frequency. For the experimental dataset, we initialize D = diag(−1, . . . , −dz). However, this choice is somewhat arbitrary, and the previous method with ωc = 10 had similar performance. Principled methods for setting (*D, F*) are still needed to ease the practical use of KKL observers; setting D to a HiPPO matrix (Gu et al., 2021; 2022) could be an interesting first step.
639
+
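+ For concreteness, this initialization could be implemented as follows; a minimal sketch assuming SciPy, where the helper name `butterworth_D` is ours:
+
+ ```python
+ import numpy as np
+ from scipy.linalg import block_diag
+ from scipy.signal import buttap
+
+ def butterworth_D(d_z, d_y, omega_c=1.0):
+     """Initialize (D, F) from Butterworth poles, following (22)."""
+     _, poles, _ = buttap(d_z)                 # analog prototype of order d_z
+     poles = 2.0 * np.pi * omega_c * poles     # scale to cut-off 2*pi*omega_c
+     blocks = []
+     for p in poles:
+         if abs(p.imag) < 1e-9:                # real pole: 1x1 block
+             blocks.append(np.array([[p.real]]))
+         elif p.imag > 0:                      # one 2x2 block per conjugate pair
+             blocks.append(np.array([[p.real, p.imag],
+                                     [-p.imag, p.real]]))
+     D = block_diag(*blocks)                   # Hurwitz, eigenvalues = the poles
+     F = np.ones((d_z, d_y))
+     return D, F
+
+ D, F = butterworth_D(d_z=10, d_y=1, omega_c=1.0)
+ ```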
641
+
642
+ Note also that there is a large amount of randomness in the different experiments we present. Hence, results may vary, and averaging out these statistical variations to rigorously compare the different methods would require a large computational overhead.
643
+
644
+ ## B.1 Benchmark Of Recognition Models
645
+
646
+ We demonstrate on numerical systems that an NN-based recognition model can estimate the initial state of a dynamical system from partial measurements. For reproducibility, we generate the training data by simulation and choose systems that can be tested again with reasonable computational overhead.
+
+ **Earthquake model** A simplified model of the effects of an earthquake on a two-story building is presented in Winkel (2017), and a NODE is trained for it in Karlsson & Svanström (2019). This linear model can be written as
647
+
648
+ $$\begin{aligned}\dot{x}_1 &= x_2\\ \dot{x}_2 &= \frac{k}{m}(x_3 - 2x_1) - F_0\omega^2\cos(\omega t)\\ \dot{x}_3 &= x_4\\ \dot{x}_4 &= \frac{k}{m}(x_1 - x_3) - F_0\omega^2\cos(\omega t)\\ y &= x_1 + \epsilon,\end{aligned}\tag{23}$$
651
+ where $x_1$ and $x_3$ are the positions of the first and second floor respectively, $x_2$ and $x_4$ their velocities, $F_0\omega^2\cos(\omega t)$ is the perturbation caused by the earthquake, and only $x_1$ is observed, with Gaussian noise of variance $\sigma_\epsilon^2 = 10^{-4}$. We consider the oscillation caused by the earthquake as a disturbance, which is known when simulating training trajectories and unknown to the recognition model: we estimate $x(0)$ from $y_{0:t_c}$ only.
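+ For reference, training trajectories for this model can be generated by simulation along the following lines; this is a sketch with assumed sampling ranges for $x(0)$, $F_0$ and $\omega$ (the exact ranges are not restated here):
+
+ ```python
+ import numpy as np
+ from scipy.integrate import solve_ivp
+
+ rng = np.random.default_rng(0)
+ k_over_m, dt, n = 10.0, 0.03, 100
+
+ def earthquake(t, x, F0, omega):
+     forcing = F0 * omega**2 * np.cos(omega * t)
+     return [x[1],
+             k_over_m * (x[2] - 2.0 * x[0]) - forcing,
+             x[3],
+             k_over_m * (x[0] - x[2]) - forcing]
+
+ def sample_trajectory():
+     x0 = rng.uniform(-1.0, 1.0, size=4)          # assumed range for x(0)
+     F0, omega = rng.uniform(0.1, 1.0), rng.uniform(1.0, 5.0)  # assumed ranges
+     t_eval = dt * np.arange(n)
+     sol = solve_ivp(earthquake, (0.0, t_eval[-1]), x0, t_eval=t_eval,
+                     args=(F0, omega), rtol=1e-8)
+     y = sol.y[0] + rng.normal(scale=1e-2, size=n)  # x1 + noise, sigma^2 = 1e-4
+     return sol.t, sol.y.T, y
+ ```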
652
+
653
+ We aim to learn a recognition model that estimates x(0) using only y0:tc with the methods described above.
654
+
655
+ We set $t_c = 40 \times \Delta t = 40 \times 0.03 = 1.2$ s, which, after trial and error, seems to be enough to reconstruct the initial condition, $N = 50$ (each sample corresponds to a random initial condition, random $F_0$ and random $\omega$), $n = 100$, and design $\psi_\theta$ (and possibly $f_\theta$) as a fully connected feed-forward network, i.e., a multi-layer perceptron, with two hidden layers containing 50 neurons each, and two fully connected input and output layers with bias terms. The RNN+ model is set to have the same internal dimension $d_z$ as the KKL model. We notice that a large enough $t_c$ and enough parameters in $\psi_\theta$, i.e., enough flexibility of the model, are needed for good generalization performance. We also pick the sampling time $\Delta t = 0.03$ s low enough that the obtained trajectories are reasonably smooth; otherwise, interpolation errors grow too large for a quantitative analysis of the results.
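+ A recognition network of this shape could be declared as follows; a PyTorch sketch where the input dimension (here $d_z = d_y(d_x + 1) = 5$ for KKL recognition of this system) and the activation are our assumptions:
+
+ ```python
+ import torch.nn as nn
+
+ def make_mlp(d_in, d_out, width=50, depth=2):
+     """Two hidden layers of 50 neurons, plus input/output layers with bias."""
+     layers, d = [], d_in
+     for _ in range(depth):
+         layers += [nn.Linear(d, width), nn.SiLU()]
+         d = width
+     layers.append(nn.Linear(d, d_out))
+     return nn.Sequential(*layers)
+
+ # psi_theta maps z(0) (dimension d_z = 5) to the state estimate x(0) (d_x = 4):
+ psi_theta = make_mlp(d_in=5, d_out=4)
+ ```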
656
+
657
+ We train the recognition model with each proposed method and evaluate the results on 100 test trajectories with random initial conditions and random input oscillations. The results on one such test trajectory are illustrated in Figure 10. We consider two settings: either learning a full NODE model (main body of the paper), or having a known dynamics model in which only k/m, the main parameter of the dynamics model, is optimized jointly with the recognition model. In our example, we have k/m = 10, but we initialize its estimate to a random value in [8, 12]. As usual, this problem is not well-posed and there are many local optima. Therefore, we can only hope to converge to a good estimate by starting from a reasonable guess of the main parameter. We keep everything else fixed, including the optimization routine, which might not be the best choice: it has been shown that for parametric optimization, trust-region optimization routines with multiple starts often lead to better results (Raue et al., 2013). For the full NODE model, we evaluate the different recognition models by computing the RMSE on the prediction of the output only, since the coordinate system for x(t) is not fixed and a different coordinate system is found in each experiment. For the parametric model, we evaluate the different recognition models by computing the RMSE on the estimation of the whole trajectory over all test scenarios, since the coordinate system for x(t) is fixed by the parametric model. The results are shown in Figure 11 and in the main body of the paper. We observe that the KKL-based models achieve higher
658
+
659
+ ![20_image_0.png](20_image_0.png)
660
+
661
+ Figure 10: Test trajectory of the parametric earthquake model with KKL recognition: the initial condition is estimated from y0:tc jointly with the model parameters.
662
+ (a) Full NODE: output error
663
+
664
+ ![20_image_1.png](20_image_1.png)
665
+
666
+ (b) Parametric model: full state error
667
+ Figure 11: Results of the obtained earthquake recognition models. We show the RMSE on the prediction of the output when a full NODE model is learned (left column) and of the whole test trajectories when a parametric model is learned (right column). Ten recognition models were trained with the methods direct (left), RNN+ (middle) and KKL (right). The direct method with tc = 0 is not shown here for scaling reasons, but its mean RMSE is over 0.6.
669
+
670
+ performance, which seems to indicate that the optimization problem based on $z(0)$ is better conditioned than the one based on $y_{0:t_c}$.
672
+
673
+ **FitzHugh-Nagumo model** This model represents a relaxation oscillation in an excitable system. It is a simplified representation of the behavior of a spiking neuron: an external stimulus is received, leading to a short, nonlinear increase of the membrane voltage, followed by a slower, linear recovery that mimics the opening and closing of ion channels (Clairon & Samson, 2020). The dynamics are written as
674
+
675
+ $$\begin{aligned}\dot{v} &= \frac{1}{\epsilon}(v - v^3 - u) + I_{ext}\\ \dot{u} &= \gamma v - u + \beta\\ y &= v + \epsilon,\end{aligned}\tag{24}$$
678
+ where $v$ is the membrane potential, $u$ is the value of the recovery channel, $I_{ext}$ is the value of the external stimulus (here a constant), $\epsilon = 0.1$ is a time-scale parameter, and $\gamma = 1.5$, $\beta = 0.8$ are kinetic parameters. Only $v$ is measured, corrupted by Gaussian measurement noise $\epsilon$ of variance $\sigma_\epsilon^2 = 5 \times 10^{-4}$.
681
+
682
+ Our aim is to learn a recognition model that estimates $(v(0), u(0))$ using $y_{0:t_c}$ and $I_{ext}$ with the methods described above. We set $t_c = 40 \times \Delta t = 40 \times 0.03 = 1.2$ s, $N = 50$ for 50 random initial conditions and external stimuli, $n = 100$, and design $\psi_\theta$ (and possibly $f_\theta$) as a fully connected feed-forward network, i.e., a multi-layer perceptron, with two hidden layers containing 50 neurons each, and two fully connected input and output layers with bias terms. The RNN+ model is set to have the same internal dimension $d_z$ as the KKL model.
683
+
684
+ ![21_image_0.png](21_image_0.png)
685
+
686
+ Figure 12: Test trajectory of the parametric FitzHugh-Nagumo model: the initial condition is estimated from y0:tc jointly with the model parameters. We use direct (top), RNN+ (middle) and KKL (bottom) recognition, on three random but similar test trajectories.
688
+ We train the recognition model with each proposed method and evaluate the results on 100 test trajectories with random initial conditions and random stimulus. The results on one such test trajectory are illustrated in Figure 12.
689
+
690
+ We either learn a full NODE model (main body of the paper) or a parametric model for which we estimate the main dynamic parameters $\epsilon$, $\gamma$ and $\beta$ jointly with the recognition model, initialized randomly in $[0.05, 0.15]$, $[0.75, 2.25]$ and $[0.4, 1.2]$ respectively. We evaluate the different recognition models as above. The results are illustrated in Figure 13 and in the main body of the paper. We observe once again that the KKL-based methods lead to lower error.
692
+
693
+ **Van der Pol oscillator** Consider the nonlinear Van der Pol oscillator with dynamics
694
+
695
+ $$\begin{aligned}\dot{x}_1 &= x_2\\ \dot{x}_2 &= \mu(1 - x_1^2)x_2 - x_1 + u\\ y &= x_1 + \epsilon,\end{aligned}\tag{25}$$
698
+ where $x_1$, $x_2$ are the states, $u = 1.2\sin(\omega t)$ is a sinusoidal control input, and $\mu = 1$ is a damping parameter. Only $x_1$ is measured, corrupted by Gaussian measurement noise $\epsilon$ of variance $\sigma_\epsilon^2 = 10^{-3}$.
699
+
701
+
702
+ ![22_image_0.png](22_image_0.png)
703
+
704
+ Figure 13: Results of the obtained FitzHugh-Nagumo recognition models. We show the RMSE on the prediction of the output when a full NODE model is learned (left column) and of the whole test trajectories when a parametric model is learned (right column). Ten recognition models were trained with the methods direct (left), RNN+ (middle) and KKL (right). The direct method with tc = 0 is not shown here for scaling, but the mean RMSE is over 0.4.
705
+
706
+ Our aim is to learn a recognition model that estimates $x(0)$ using $y_{0:t_c}$ and $u_{0:t_c}$ with the methods described above. We set $t_c = 40 \times \Delta t = 40 \times 0.03 = 1.2$ s, $N = 50$ for 50 random initial conditions and values of $\omega$, $n = 100$, and design $\psi_\theta$ (and possibly $f_\theta$) as a fully connected feed-forward network, i.e., a multi-layer perceptron, with two hidden layers containing 50 neurons each, and two fully connected input and output layers with bias terms. Since $u$ is a sinusoidal control input of varying frequency, it can be generated by
707
+
708
+ $$\begin{aligned}\dot{\omega}_1 &= \omega_2,\\ \dot{\omega}_2 &= -\omega_3\omega_1,\\ \dot{\omega}_3 &= 0,\end{aligned}\tag{26}$$
711
+
712
+ and $u = \omega_1$, where $\omega_1$, $\omega_2$ are the internal states of the sinusoidal system and $\omega_3 > 0$ encodes its (squared) frequency. Hence, we can use the KKLu recognition model with $d_\omega = 3$. The RNN+ model is set to have the same internal dimension $d_z$ as the KKLu model.
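+ In this functional setting, the observer is driven by the stacked signal $(y, u)$; a minimal sketch reusing the `butterworth_D` helper sketched earlier, where the dimensions follow the formula for $d_z$, here $(1 + 1)(2 + 3 + 1) = 12$:
+
+ ```python
+ import numpy as np
+ from scipy.integrate import solve_ivp
+ from scipy.interpolate import interp1d
+
+ d_y, d_u, d_x, d_w = 1, 1, 2, 3
+ d_z = (d_y + d_u) * (d_x + d_w + 1)        # = 12 for the Van der Pol setup
+ D, F = butterworth_D(d_z, d_y + d_u)       # F now multiplies (y, u) jointly
+
+ def kklu_features(t_grid, y_samples, u_samples):
+     """Drive the filter with (y, u) and return z(0) (backward in time)."""
+     w = interp1d(t_grid, np.column_stack([y_samples, u_samples]), axis=0,
+                  fill_value="extrapolate")
+     rhs = lambda t, z: D @ z + F @ w(t)
+     sol = solve_ivp(rhs, (t_grid[-1], 0.0), np.zeros(d_z), rtol=1e-6)
+     return sol.y[:, -1]
+ ```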
713
+
714
+ We train the recognition model with each proposed method and evaluate the results on 100 test trajectories with random initial conditions and random input frequency. The results on one such test trajectory are illustrated in Figure 14. We also consider either a full NODE model, or a parametric model for which $\mu$ is jointly estimated, starting from a random initial guess in $[0.5, 1.5]$. The corresponding box plots are shown in Figure 15. We observe that in both settings, the performance with the different recognition models is very similar. In the main body of the paper, we show the same plot but with higher noise of variance $\sigma_\epsilon^2 = 0.1$ instead of $\sigma_\epsilon^2 = 10^{-3}$ here. Due to the very similar performance, the hierarchy of the different recognition models varies slightly between the noise levels.
715
+
716
+ ## B.2 Synthetic Dataset: Harmonic Oscillator With Unknown Frequency
717
+
718
+ We demonstrate the performance of structured NODEs in learning the dynamics of a harmonic oscillator with unknown frequency from partial observations, with varying degrees of structure. We train on $N = 50$ trajectories from 50 random initial states in $[-1, 1]^2$ and frequency $\omega^2 = 1$ (i.e., period 6.3 s), of $n = 50$ time steps each for an overall length of 3 s, corrupted by Gaussian measurement noise of variance $\sigma^2 = 10^{-4}$.
719
+
720
+ We use $t_c = 20 \times \Delta t = 1.2$ s for the recognition model. We optimize the parameters using Adam (Kingma & Ba, 2015) with a learning rate of 0.005.
722
+
723
+ The obtained results are depicted in Figure 16 with the KKL recognition model. We show the prediction of a random test trajectory with random initial state: $y_{0:t_c}$ is measured for this trajectory and used by the recognition model to estimate x(0). Then, the learned NODE is simulated to predict the whole state trajectory for 500 time steps, i.e., ten times longer than the training time, to illustrate the long-term behavior. For the quantitative results presented in the main body of the paper, we predict on test trajectories of 150 time steps, i.e., three times the training time. These shorter trajectories make the long-term performance difference due to the degree
724
+
725
+ ![23_image_0.png](23_image_0.png)
726
+
727
+ Figure 14: Test trajectory of the parametric Van der Pol model: the initial condition is estimated from y0:tc jointly with the model parameters. We use direct (top), RNN+, KKL and KKLu (bottom) recognition, on four random but similar test trajectories.
729
+ of prior knowledge less visible, but lead to more consistent and quantitatively comparable results (with the long test trajectories, the interquartile range of the experiments was very large due to error accumulation, which can blow up over a long prediction horizon). We train the dynamics and recognition model in each setting ten times, for the recognition models direct, RNN+ and KKL. The mean RMSE over one hundred test trajectories is depicted in the main body of the paper.
731
+
732
+ **No structure** We start without imposing any structure, i.e., learning a general latent NODE model of the system as in (4.2). The NODE fits the observations y = x1, but not x2, as it has learned the dynamics in
733
+
734
+ ![24_image_0.png](24_image_0.png)
735
+
736
+ Figure 15: Results of the obtained Van der Pol recognition models. We show the RMSE on the prediction of the output when a full NODE model is learned (left column) and of the whole test trajectories when a parametric model is learned (right column). Ten recognition models were trained with the methods direct (left), RNN+, KKL and KKLu (right). The direct method with tc = 0 is not shown here for scaling reasons, but its mean RMSE is over 0.6.
738
+
739
+ ![24_image_1.png](24_image_1.png)
740
+
741
+ Figure 16: Random test trajectory of the trained NODE for the harmonic oscillator: without imposing any structure (a), imposing Hamiltonian dynamics (b), imposing x˙1 = x2 (c), directly identifying a parametric model (d), and learning a recognition model of an extended state-space representation where x3 = ω² (e). We show the true and predicted trajectories of x1 (top) and x2 (bottom).
744
+
745
+ another coordinate system, which is expected for general latent NODEs. It also does not conserve energy, which is not surprising when no structure is imposed, as discussed, e.g., in Greydanus et al. (2019).
746
+
747
+ **Hamiltonian state-space model** We now assume the user has some physical insight about the system at hand: it derives from a Hamiltonian function, i.e., there exists H such that
748
+
749
+ $$\dot{x}_{1}=\frac{\partial H}{\partial x_{2}}(x),\qquad\dot{x}_{2}=-\frac{\partial H}{\partial x_{1}}(x).\tag{27}$$
750
+
751
+ We approximate H directly with a neural network Hθ of weights θ, such that the NODE has form (27), and inject this into the optimization problem (4). This formulation enforces the constraint that the dynamics derive from a Hamiltonian function, whose choice is free. In that case, we do not necessarily find the "physical" state-space realization, as several Hamiltonian functions can fit the data. However, the obtained state-space model conserves energy due to the Hamiltonian structure.
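+ Concretely, the vector field (27) can be obtained by differentiating a scalar network; a PyTorch sketch of one possible implementation (the architecture choices are ours):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class HamiltonianVectorField(nn.Module):
+     """NODE right-hand side of form (27), with H a scalar network H_theta."""
+     def __init__(self, width=50):
+         super().__init__()
+         self.H = nn.Sequential(nn.Linear(2, width), nn.Tanh(),
+                                nn.Linear(width, width), nn.Tanh(),
+                                nn.Linear(width, 1))
+
+     def forward(self, t, x):
+         # x = (x1, x2); the field is (dH/dx2, -dH/dx1), so energy is conserved.
+         with torch.enable_grad():
+             x = x.clone().requires_grad_(True)
+             dH = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
+         return torch.stack([dH[..., 1], -dH[..., 0]], dim=-1)
+ ```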
752
+
753
+ **Imposing $\dot{x}_1 = x_2$** We now impose a somewhat stronger structure in (4):
754
+
755
+ $$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\nabla H(x_1), \tag{28}$$
757
+ where only the dynamics of $x_2$ need to be learned. This enables the NODE to recover both the initial state and the unknown part of the dynamics in the imposed coordinates while also conserving energy, as this is a particular case of Hamiltonian dynamics with Hamiltonian function $\frac{1}{2}x_2^2 + H(x_1)$.
758
+
759
+ **Parametric system identification** We now directly learn a parametric model of the harmonic oscillator
760
+
761
+ $$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\omega^2 x_1, \tag{29}$$
764
+
765
+ where $\omega > 0$ is the unknown frequency. We approximate $\omega$ with a parameter $\theta$, which is initialized randomly in $[0.5, 2]$. We obtain excellent results with this method, as $\theta$ is estimated correctly up to $10^{-2}$ and the trained recognition model gives satisfying results. This demonstrates that our framework can recover both the dynamics and the recognition model in the physical coordinates imposed by the parametric model from partial and noisy measurements in this simple use case.
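+ Jointly fitting $\theta$ and the recognition model then amounts to backpropagating an output loss through the ODE solver; a sketch assuming the `torchdiffeq` package, where `x0_hat`, `t_grid` and `y` are placeholders for the recognition estimates and the measured data:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchdiffeq import odeint  # assumed available
+
+ class ParametricOscillator(nn.Module):
+     def __init__(self):
+         super().__init__()
+         # theta approximates omega, with a random initial guess in [0.5, 2]:
+         self.theta = nn.Parameter(0.5 + 1.5 * torch.rand(()))
+
+     def forward(self, t, x):
+         return torch.stack([x[..., 1], -self.theta**2 * x[..., 0]], dim=-1)
+
+ model = ParametricOscillator()
+ opt = torch.optim.Adam(model.parameters(), lr=5e-3)
+
+ def output_loss(x0_hat, t_grid, y):
+     # x0_hat: (N, 2) estimates from psi_theta; y: (len(t_grid), N) measured x1.
+     x = odeint(model, x0_hat, t_grid)      # (len(t_grid), N, 2)
+     return ((x[..., 0] - y) ** 2).mean()   # fit the measured output only
+ ```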
766
+
767
+ **Extended state-space model** We now consider the extended state-space model
768
+
769
+ $$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_3 x_1, \qquad \dot{x}_3 = 0, \tag{30}$$
771
+ where $x_3 = \omega^2$ is a constant state representing the unknown frequency. In this case, the dynamics are completely known and only the recognition model is left to train, in order to estimate the initial condition $x(0) \in \mathbb{R}^3$, where $x_3(0)$ is the unknown frequency. This is the same degree of structure as the parametric model: the dynamics are known up to the frequency. However, it is also a more open problem: since we learn a recognition model for $x(0) \in \mathbb{R}^3$, at each new trajectory we estimate a new value of the frequency $x_3(0)$, which was considered the same across all trajectories for the previous methods. Therefore, with this setting, we also obtain models that can predict energy-preserving trajectories in the physical coordinates, but with lower accuracy due to this extra degree of freedom.
774
+
775
+ ## B.3 Experimental Case Study: Robotic Exoskeleton
776
+
777
+ We use a set of measurements collected at Wandercraft on one of their exoskeletons and presented in Vigne (2021). More details on the robot, the dataset and the methods applied at Wandercraft are provided in Section 4.1.2.1 of Vigne (2021).
779
+
780
+ For this experiment, the robot's pelvis is fixed to the wall and low-amplitude sinusoidal inputs are sent to the front hip motor with different frequencies between 2 Hz and 16 Hz. A linear model has been identified in Vigne (2021) by modeling the deformation as a linear spring in the hip motor, yielding a system with $x \in \mathbb{R}^4$. The angle of the hip (x1) is measured by the encoder on this motor, while a gyrometer measures the angular velocity of the thigh (x4). The measurements are sampled with ∆t = 1 ms. The aim is to identify the nonlinear deformations in the hip motor, which can be seen in motion capture and cause significant errors in gait planning, but are not captured by the known models of the exoskeleton. We start by preprocessing the signals: for each input frequency and corresponding trajectory, we compute the FFT of y, apply a Gaussian window at fc = 50 Hz on the spectrum, then apply an inverse FFT and slice off the beginning and the end (100 time steps) of each signal to get rid of the border effects. For u, which is not very noisy, we instead apply a Butterworth filter of order 2 and cut-off frequency 200 Hz. We cut the long trajectories for each input frequency into slices of 200 samples, and stack these training trajectories of length 0.2 s together to form our set of training trajectories. Hence, all trajectories have the same sampling times and can easily be simulated in parallel. We choose the length of 0.2 s because it seems long enough to capture some of the dynamics even in the low-frequency regime, but also short enough to remain acceptable in the high-frequency regime.
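+ The preprocessing could be implemented along these lines; a sketch where the exact placement of the Gaussian window (here a low-pass weighting of width fc on the spectrum) is our reading of the description above:
+
+ ```python
+ import numpy as np
+ from scipy.signal import butter, filtfilt
+
+ def smooth_output(y, dt=1e-3, fc=50.0):
+     """Gaussian window at fc on the spectrum, inverse FFT, trimmed borders."""
+     Y = np.fft.rfft(y)
+     freqs = np.fft.rfftfreq(len(y), d=dt)
+     Y *= np.exp(-0.5 * (freqs / fc) ** 2)   # Gaussian weighting of width fc
+     y_smooth = np.fft.irfft(Y, n=len(y))
+     return y_smooth[100:-100]               # slice off 100 steps at each end
+
+ def smooth_input(u, dt=1e-3, fc=200.0):
+     """Butterworth filter of order 2 and cut-off frequency 200 Hz."""
+     b, a = butter(2, fc, btype="low", fs=1.0 / dt)
+     return filtfilt(b, a, u)[100:-100]
+ ```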
782
+
783
+ We then run the proposed framework on this data. We directly train the NODE on a random subset of training trajectories, use a subset of validation trajectories for early stopping, and a subset of test trajectories to evaluate the performance of the learned model. When no further indications are provided, we use a recognition model of type KKL with $d_z = 10$, $F = 1_{d_z \times d_y}$, which yields 110 for the dimension of $(z(t_c), u_{0:t_c})$.
+
+ The parameter D was optimized after being initialized at diag(−1, . . . , −10). We also trained a recognition model of type KKLu with $d_z = 50$, for which D was also optimized starting at diag(−2, . . . , −100), and recognition models of type direct and RNN+ using the same information contained in $(y_{0:t_c}, u_{0:t_c})$ and the same size of the latent state as for KKLu, i.e., dimension 50. Both recognition and dynamics models are feed-forward networks with five hidden layers of 50 and 100 neurons, respectively, and SiLU activation.
790
+
791
+ We notice that for this complex and nonautonomous use case, the direct and RNN+ recognition methods seem easier to train. However, they also take longer to train due to having more parameters, and have higher generalization error on the long test rollouts. We also notice that D needs to be chosen well for the KKL-based recognition models to obtain good performance, which needs to be investigated further.
792
+
793
+ Normalization is also an important aspect in the implementation: all losses and evaluation metrics are scaled to the same range, so that all loss terms play a similar role and remain within a similar range. This ensures that the values on which the optimization is based are always numerically tractable for the chosen solver.
794
+
795
+ Different scaling possibilities are discussed in Sec. 5.2.3 of Schittkowski (2002). In our case, since we do not know in advance the values that x(t) will take, we compute the mean and standard deviation of the samples in y(t) and u(t) and scale all outputs y(t) and inputs u(t) according to these. We also scale all states x(t) or derivatives x˙(t) using the scaler on y(t) for the dimensions that are measured (x1 and x4), and the mean of the y(t) scaler for the other dimensions. This is not quite correct, but it is the best we can do without knowing the range of values that x(t) will take, and it is enough to ensure that all scaled values of x(t) stay within a reasonable range.
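+ One possible implementation of this scaling scheme (the class name and the small guard added to the standard deviations are ours):
+
+ ```python
+ import numpy as np
+
+ class TrajectoryScaler:
+     """Standardize y and u; reuse the y statistics for the state dimensions."""
+     def fit(self, y, u):
+         self.y_mean, self.y_std = y.mean(axis=0), y.std(axis=0) + 1e-8
+         self.u_mean, self.u_std = u.mean(axis=0), u.std(axis=0) + 1e-8
+         return self
+
+     def scale_y(self, y):
+         return (y - self.y_mean) / self.y_std
+
+     def scale_u(self, u):
+         return (u - self.u_mean) / self.u_std
+
+     def scale_x(self, x, measured=(0, 3)):
+         # Measured dimensions (x1, x4) use their own y statistics; the other
+         # dimensions fall back to the averaged y statistics, as described above.
+         mean = np.full(x.shape[-1], self.y_mean.mean())
+         std = np.full(x.shape[-1], self.y_std.mean())
+         mean[list(measured)], std[list(measured)] = self.y_mean, self.y_std
+         return (x - mean) / std
+ ```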
796
+
797
+ We investigate three settings: no structure, imposing x˙1 = x2 and x˙3 = x4 ("structural" prior), and learning the residuals from the prior linear model on x˙2 and x˙4 ("regularizing" prior) with λ = 5 × 10−7 (we already have x˙1 = x2 and x˙3 = x4 in the prior model). For evaluating the prior linear model, we use the estimated initial states obtained by the recognition model of the last setting, in order to be in the coordinate system that corresponds to the prior. In each setting, we learn from N = 265 trajectories of a subset of input frequencies: {2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 11, 13, 15} Hz. We then evaluate on 163 test trajectories of 0.2 s from these input frequencies, to evaluate data fitting in the trained regime. We also evaluate on 52 longer (2 s) test trajectories from other input frequencies, to evaluate the interpolation capabilities of the learned model: {2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17} Hz. We use the Adam optimizer (Kingma & Ba, 2015) with a decaying learning rate starting at 8 × 10−3 for the first two settings and 5 × 10−3 for the third setting.
798
+
799
+ The obtained results are described in the main body of the paper. One longer test trajectory with an input frequency outside of the training regime is presented in Fig. 7 when imposing x˙ 1 = x2 and x˙ 3 = x4. Overall, we find that structured NODEs are able to fit this complex nonlinear dynamical system using real-world data and realistic settings. The predictions of the obtained models are not perfect, but they are much better than those of the prior model, such that they could probably be used in a closed-loop control task like the linear model currently is at Wandercraft. This is confirmed by implementing an EKF that uses the learned models for state estimation (tuning fixed for all recognition models and all degrees of prior knowledge).
800
+
801
+ Adding structure leads to similar performance, but to a model that can be physically interpreted in terms of position and velocity. In the third setting (hard constraints and residuals model), the accuracy is lower. This is due to the fact that the linear prior model leads to rather inaccurate predictions.
804
+
805
+ ![27_image_0.png](27_image_0.png)
806
+
807
+ Figure 17: Structured NODEs and KKL recognition on the robotics dataset. We test on 163 trajectories of 0.2 s from the same input frequencies as the training data to evaluate data fitting in the trained regime, and compute the RMSE: we obtain respectively 5.6 (a), 0.16 (b), 0.18 (c), 0.31 (d). We show one such test trajectory (x1 top row, x4 bottom row) from an unknown initial condition.
LTAdaRM29K/LTAdaRM29K_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 28,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 28,
14
+ "code": 0,
15
+ "table": 3,
16
+ "equations": {
17
+ "successful_ocr": 60,
18
+ "unsuccessful_ocr": 3,
19
+ "equations": 63
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }